Most agree that in web apps one cannot lock rows for editing ahead of time. Optimistic concurrency is the typical choice, so it falls to the application developer to handle contention when it does occur. I've found scenarios where it would have been useful to know the rows-modified count for an update or delete:

Models have a "version" field, and we'd like to prevent two requests from clobbering each other with updates from the same version. These race conditions are generally the way of the web when two people are allowed to edit data simultaneously, but sometimes the data is more critical and we'd like to present the conflict to the second user trying to update (e.g., Bugzilla's mid-air collisions). In this case it would be nice to save() only on the precondition that the version in the DB hasn't changed. At a low level, this means something like UPDATE ... WHERE ID = xxx AND VERSION = xxx, and the rows-modified count then indicates whether it worked.
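The low-level shape of that conditional update, and the rows-modified check, can be sketched with plain sqlite3 (the table, columns, and the save_if name are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO doc VALUES (1, 'draft', 1)")

def save_if(conn, doc_id, expected_version, new_body):
    # Conditional update: only touches the row if the stored version
    # still matches what this request last read.
    cur = conn.execute(
        "UPDATE doc SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, expected_version),
    )
    # rowcount is the rows-modified count: 1 means this request won,
    # 0 means someone else saved first.
    return cur.rowcount == 1

won = save_if(conn, 1, 1, "edit A")   # version matches: succeeds
lost = save_if(conn, 1, 1, "edit B")  # stale version: zero rows modified
print(won, lost)  # True False
```

Note there is no gap between the check and the write: the precondition is evaluated by the database inside the UPDATE itself.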

When we really want to know that a delete() call actually deleted data. Django's ORM delete() passes silently if the object is already gone, but there are cases where we want to know that the current request actually performed the delete. One example is deleting OAuth authorization codes: they should be used once and only once. (Yes, we could just mark the code as used, but that leaves us with the update/versioning problem above; this is just an example.) If two requests retrieve the auth code and both believe they deleted (used) it, that would be considered a potential security problem in OAuth.
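A minimal sketch of that single-use guarantee, again in plain sqlite3 (schema and names invented for the example): the delete's rows-modified count is what distinguishes the one request that actually consumed the code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_code (code TEXT PRIMARY KEY)")
conn.execute("INSERT INTO auth_code VALUES ('abc123')")

def consume(conn, code):
    # DELETE and check rows modified: only one request can ever see 1 here,
    # no matter how many raced to use the same code.
    cur = conn.execute("DELETE FROM auth_code WHERE code = ?", (code,))
    return cur.rowcount == 1

used = consume(conn, "abc123")    # True: this request consumed it
reused = consume(conn, "abc123")  # False: already gone
print(used, reused)  # True False
```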

In both cases, having access to rows modified before commit would go a long way toward application-specific optimistic concurrency control. AFAICT, only that answer from the DB driver can tell us whether the current request actually got to do a given data update. Looking at the django.db.models code, rows-modified appears to be available deep in the stack, where DB-specific connections and cursors are used, but it is otherwise ignored there.

Proposal: a new Model method save_if() that accepts a precondition, such as version=5 (more like F('version') == 5 under the covers; obviously we want to compare against the actual value in storage). If the precondition fails, raise MyModel.PreconditionFailed, which extends ObjectPreconditionFailed. In my example above I would probably translate that into a model-specific VersionObsolete exception or similar.
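A runnable sketch of how that proposed surface might be used (save_if, PreconditionFailed, and ObjectPreconditionFailed are this ticket's proposal, not existing Django API; the database is faked with a plain attribute so only the control flow is visible):

```python
class ObjectPreconditionFailed(Exception):
    pass

class Article:
    class PreconditionFailed(ObjectPreconditionFailed):
        pass

    def __init__(self, pk, version):
        self.pk = pk
        self.version = version
        self._stored_version = version  # stand-in for the row in the DB

    def save_if(self, version):
        # Real implementation would be UPDATE ... WHERE id = %s AND
        # version = %s, then a check of the rows-modified count.
        if self._stored_version != version:
            raise self.PreconditionFailed
        self._stored_version += 1
        self.version = self._stored_version

article = Article(pk=1, version=5)
article._stored_version = 6  # simulate a concurrent save by another request

conflict = False
try:
    article.save_if(version=5)
except Article.PreconditionFailed:
    conflict = True  # translate to a model-specific VersionObsolete here
print(conflict)  # True
```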

Thoughts?

I can work on patches, but I wanted to log this first and see what kind of reaction it got before spending the time on that.

Update from comment thread: The issue of delete returning counts (update already does) has been moved to #16891. This ticket's scope is now strictly the idea of a conditional save.

There is some proof-of-concept code showing how to do optimistic locking using pre_save and pre_delete signals, at https://github.com/akaariai/django_optimistic_lock. I think the approach taken there should work, but the code is nothing more than a POC: it is really ugly, will probably eat your data, and probably does not work at all on any database other than PostgreSQL.

I see where you're going with that field, but I'm hesitant to agree that a version field is exactly what everybody wants. I was going for the more primitive pieces: the precondition and rows modified. I'm hoping we gather more use cases on the django-devs thread, though, to be more sure.

I accidentally double-clicked submit, and my second submit was met with "this ticket has already been modified by someone else". So apparently *this* system does version checking. Is this implemented in Django? Should we take notes? :)

One can implement any logic in the signal callbacks, but it takes too much work to implement such a field that way. Also, relying on catching the exceptions from the save is just plain wrong (it results in transaction commit all too easily). The optimistic-lock check should be done in model validation if possible.

A conditional save would be handy. But it is not trivial to implement. The implementation should be one that handles derived classes, does not easily lead to half-updates (the condition was ok for the first table in derived save, but not for the second) and does not lead to too much code duplication. Having two different versions of base_save does not look appealing. Are there enough use cases that are not solvable by overriding the save() method or signals?

It is a good idea to return models.UPDATED or models.INSERTED from save(), returning models.DELETED or models.NO_ACTION from delete() and modified object counts from qs.update(), qs.delete(). Row counts instead of object counts from qs.update() and qs.delete() is a bit harder problem. The object count would not include related models deleted or updated. The return values from save() and delete() are just examples.

Also, relying on catching the exceptions from the save is just plain wrong (it results in transaction commit all too easily).

I don't follow this statement. Catching exceptions results in commits? Do you mean that the programmer might not think about whether they want to proceed with other operations or roll back the transaction on condition failure? This does highlight the need to make this feature's interaction with transaction management clear.

A conditional save would be handy. But it is not trivial to implement. The implementation should be one that handles derived classes, does not easily lead to half-updates (the condition was ok for the first table in derived save, but not for the second) and does not lead to too much code duplication.

I think the answer there is to check the precondition first and cut off the whole operation if it fails. You give the precondition to the immediate object. Even if we devise a way to specify preconditions for the cascaded updates of related objects, those updates would have the opportunity to roll back the transaction. I agree on code duplication; my goal is that you can generally use a provided ModelConcurrencySafe.save() and supply simple precondition expressions.

Having two different versions of base_save does not look appealing.

Agreed. Could also be save(precondition=...) instead of save_if().

Are there enough use cases that are not solvable by overriding the save() method or signals?

As it stands today, the use cases I've described so far require writing raw SQL updates, manually handling batching of related objects, and manually firing signals. That's what I'd like to avoid. Note that I'm not arguing these are majority scenarios, just scenarios that could have better support and shouldn't force raw-SQL overrides of save(), delete(), etc. The only reason for that is simply the need to work with the cursor object directly to get the update count.

It is a good idea to return models.UPDATED or models.INSERTED from save(),

With a precondition, save() could return models.NO_ACTION as well (understanding that you're just giving examples/pseudocode).

returning models.DELETED or models.NO_ACTION from delete() and modified object counts from qs.update(), qs.delete(). Row counts instead of object counts from qs.update() and qs.delete() is a bit harder problem. The object count would not include related models deleted or updated. The return values from save() and delete() are just examples.

Related models are interesting because we first have to devise a clear way to specify the relevant conditions on those objects. I guess that's what you're assuming with your OptimisticLock field?

Also, relying on catching the exceptions from the save is just plain wrong (it results in transaction commit all too easily).

I don't follow this statement. Catching exceptions results in commits? Do you mean that the programmer might not think about whether they want to proceed with other operations or roll back the transaction on condition failure? This does highlight the need to make this feature's interaction with transaction management clear.

Yes, I mean programmer error. If the only indication of concurrent modification is the exception from save(), it is all too easy to write code like the following:
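A sketch of that failure mode (all names here are illustrative stand-ins, not real Django API): the exception is caught and swallowed, nothing is rolled back, and the surrounding transaction happily commits.

```python
class ConcurrentModificationError(Exception):
    pass

committed = []

def commit():
    # stand-in for transaction.commit()
    committed.append(True)

def save_if(expected_version, db_version):
    # stand-in for UPDATE ... WHERE version = %s; zero rows modified
    # means the precondition failed
    if db_version != expected_version:
        raise ConcurrentModificationError

try:
    save_if(expected_version=5, db_version=6)  # someone else saved first
except ConcurrentModificationError:
    pass  # "handled" -- but nothing was rolled back

commit()  # the transaction commits even though our save never happened
print(len(committed))  # 1
```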

As you said, how to handle transactions is important here. But this is not only about transactions: even if obj2.save_if() returned NOT_SAVED, the same problem would still be there. It also seems that returning NOT_SAVED instead of raising an exception is a bad thing; the return value is all too easy to miss...

A conditional save would be handy. But it is not trivial to implement. The implementation should be one that handles derived classes, does not easily lead to half-updates (the condition was ok for the first table in derived save, but not for the second) and does not lead to too much code duplication.

I think the answer there is to check the precondition first and cut off the whole operation if it fails. You give the precondition to the immediate object. Even if we devise a way to specify preconditions for the cascaded updates of related objects, those updates would have the opportunity to roll back the transaction. I agree on code duplication; my goal is that you can generally use a provided ModelConcurrencySafe.save() and supply simple precondition expressions.

But I still have the feeling that save() is too late for this check. Data validation is the right time. save() should at most guard against concurrent modification, but relying on save()'s guard in program logic seems problematic.

Are there enough use cases that are not solvable by overriding the save() method or signals?

As it stands today, the use cases I've described so far require writing raw SQL updates, manually handling batching of related objects, and manually firing signals. That's what I'd like to avoid. Note that I'm not arguing these are majority scenarios, just scenarios that could have better support and shouldn't force raw-SQL overrides of save(), delete(), etc. The only reason for that is simply the need to work with the cursor object directly to get the update count.

It is a good idea to return models.UPDATED or models.INSERTED from save(),

With a precondition, save() could return models.NO_ACTION as well (understanding that you're just giving examples/pseudocode).

This seems dangerous. Raising an exception is better, as the return condition is too easy to miss.

By the way, I think I have a cleaner approach to OptimisticLockField: use the field's pre_save() hook to do the guarding and validate() to do the validation (so forms would "just work"). No signals! :) Writing such a field should be easy. If one wants more complex logic, then model validation and overriding save() and delete() should work; using an abstract superclass should make that relatively straightforward. If I have time I will write a new implementation.

As you said, how to handle transactions is important here. But this is not only about transactions: even if obj2.save_if() returned NOT_SAVED, the same problem would still be there. It also seems that returning NOT_SAVED instead of raising an exception is a bad thing; the return value is all too easy to miss...

So we could add transaction-control options (e.g., rollback_on_failed=True) to help with this.

But I still have the feeling that save() is too late for this check. Data validation is the right time. save() should at most guard against concurrent modification, but relying on save()'s guard in program logic seems problematic.

Now we're getting to the potential misunderstanding. The precondition is not just a Python expression that I'm evaluating in memory (otherwise, yeah, I'd do that and wouldn't have filed this ticket). The precondition is potentially a WHERE clause on the UPDATE that may lead to a rows-modified count of zero. This is the only way for the current process to know, *right at the time of save*, that it's not in conflict. Save can't be too late, because the whole point of this ticket is to avoid race conditions and handle them more intelligently: any time you leave between the check and the DB operation is time in which two processes can enter the same region of code and lose the benefit of the check.

Are there enough use cases that are not solvable by overriding the save() method or signals?

As it stands today, the use cases I've described so far require writing raw SQL updates, manually handling batching of related objects, and manually firing signals. That's what I'd like to avoid. Note that I'm not arguing these are majority scenarios, just scenarios that could have better support and shouldn't force raw-SQL overrides of save(), delete(), etc. The only reason for that is simply the need to work with the cursor object directly to get the update count.

It is a good idea to return models.UPDATED or models.INSERTED from save(),

With a precondition, save() could return models.NO_ACTION as well (understanding that you're just giving examples/pseudocode).

This seems dangerous. Raising an exception is better, as the return condition is too easy to miss.

If the programmer has elected to use our new Model subclass, then it's a pretty good bet they've read the docs and know why they didn't just inherit from Model. This isn't a gotcha we're throwing into Model; the person using ModelConcurrencySafe (or whatever we call it) didn't do so by accident.

However, there's still the option of providing transaction-control parameters so that ModelConcurrencySafe.save() does the right thing (e.g., rollback) before returning control to the calling code.

I am now convinced that having model.save() do an update with a WHERE condition is a good idea. I now see that this is analogous to what happens when there is a unique constraint violation. If I understand the code correctly, the normal procedure for uniqueness checks is:

1. Model validation checks that there is no violation; no FOR UPDATE locking is done.

2. If a concurrent transaction causes a unique violation, the database reports the unique-constraint error, and an exception from .save() is the result.

3. On PostgreSQL, if you are inside a transaction, the transaction is put into the aborted state (the abort happens in PostgreSQL, not in Python code); you can only roll back the transaction or roll back to a savepoint. If you are outside a transaction, a half-update is possible.

I think this is exactly what should be done in optimistic locking, including the part about transaction abortion, only this time in Python code. If there is a ConcurrentModificationException, the transaction cannot be committed; it can only be rolled back to a previous savepoint or rolled back completely, and no further queries are possible. I think this is the safest behavior: rolling back the transaction automatically is surprising, and allowing the transaction to commit without any action seems dangerous. Users can wrap the save in a savepoint if they want.
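The wrap-the-save-in-a-savepoint option can be illustrated with explicit savepoints in plain sqlite3 (schema and savepoint name invented for the example): earlier work in the transaction survives, while the failed conditional save is undone rather than silently committed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction by hand
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, version INTEGER)")
conn.execute("INSERT INTO doc VALUES (1, 1)")

conn.execute("BEGIN")
conn.execute("UPDATE doc SET version = 2 WHERE id = 1")  # earlier work
conn.execute("SAVEPOINT before_save")
cur = conn.execute("UPDATE doc SET version = 99 WHERE id = 1 AND version = 5")
if cur.rowcount == 0:
    # Concurrent modification detected: roll back to the savepoint only,
    # keeping the earlier work instead of committing a half-update.
    conn.execute("ROLLBACK TO SAVEPOINT before_save")
conn.execute("COMMIT")

final = conn.execute("SELECT version FROM doc").fetchone()[0]
print(final)  # 2
```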

The next big question is whether there is a nice way to implement this. In my opinion the problems are the following:

1. There must not be duplication of save_base: it is not the simplest piece of code, and it changes somewhat often.

2. How to indicate from save_base what happened.

3. How to pass the WHERE conditions to the right updates/deletes, and what the semantics of that condition are.

4. Not complicating save_base too much.

Bonus points if this can be done without complicating base save. One idea is based on the fact that all the queries are done using _base_manager; maybe overriding that is the correct solution? save_base would pass the instance to _base_manager._update() and _delete(), and the overridden _delete and _update could then do whatever checking is needed in the update's WHERE condition. I think this approach is worth some investigation.

A quick look suggests that the update is possible using the _base_manager approach. But related-object updates and deletes, as well as the base object's deletion, are done using batch_delete and batch_update from django.db.models.sql.subqueries, and these seem harder to fix. Using optimistic locking in related-object batch_update and batch_delete seems to be the hardest problem; maybe that doesn't need to be possible for this feature to be useful.

A quick look suggests that the update is possible using the _base_manager approach. But related-object updates and deletes, as well as the base object's deletion, are done using batch_delete and batch_update from django.db.models.sql.subqueries, and these seem harder to fix. Using optimistic locking in related-object batch_update and batch_delete seems to be the hardest problem; maybe that doesn't need to be possible for this feature to be useful.

That's the idea I was initially going with, and I'd be willing to investigate it while hacking on the main feature. Are we at a point where it would be worthwhile for me to get started on this and work toward patches to submit?

I think this is a good idea, but you still need to convince a core developer that this is a good idea :) A working and clean patch is a good way to do so.

Maybe the sanest approach is to modify save_base and delete with appropriate hooks so that optimistic locking can be implemented as an external/contrib module. I do still think that duplicating save_base is a bad idea. Return values from qs.update, qs.delete, model.save and model.delete are probably easier to get included.

Sorry if I have made this more complicated than this really should be, I tend to get a little bit carried away sometimes... Anyways, +1 from me to modified save_base.

The model could provide two basic ways to deal with concurrency, which could be extensible:

object.save(precondition=..., concurrencyhandler=MERGE_SILENTLY)

or

object.save(precondition=..., concurrencyhandler=RAISE_ERROR)

The first would try to merge silently and raise an error only if it cannot; the second would raise an error even if there are no field-level conflicts.

Of course, I think the default should be RAISE_ERROR, since merging silently could lead to unexpected results in some cases.
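To make the MERGE_SILENTLY idea concrete, here is one possible field-level merge rule (entirely illustrative; the names and the dict-based "models" are made up): two edits merge only when they changed disjoint sets of fields.

```python
class ConcurrencyConflict(Exception):
    pass

def merge_silently(base, theirs, ours):
    """Three-way merge: base is the snapshot both editors started from."""
    their_changes = {k: v for k, v in theirs.items() if base[k] != v}
    our_changes = {k: v for k, v in ours.items() if base[k] != v}
    if set(their_changes) & set(our_changes):
        # Both sides touched the same field: fall back to RAISE_ERROR behavior.
        raise ConcurrencyConflict("field-level conflict")
    merged = dict(theirs)
    merged.update(our_changes)
    return merged

base = {"title": "Old", "body": "text"}
theirs = {"title": "New", "body": "text"}  # they edited the title
ours = {"title": "Old", "body": "longer"}  # we edited the body
print(merge_silently(base, theirs, ours))  # {'title': 'New', 'body': 'longer'}
```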

Merge is an interesting idea, but it's definitely not the first part of this I would try to implement :) It's complicated because:

Some models may have fields that are related to one another and shouldn't be merged independently. For this there could be a blacklist of fields excluded from merging, or sets of fields that must be updated together.

Does this extend to N objects being involved in a concurrent update (i.e., more than two simultaneous updates of the same object)?

Maybe it's not *so* bad, but it does seem to present some interesting challenges.

Dropping in to say that I intend to work up some patches in the coming weeks. Hopefully sooner, but making that promise wouldn't be a good idea. I do this understanding that we still need a design decision, so I'll focus on demonstrating the desired functionality discussed in this ticket and on the mailing-list thread.

I created a patch changing the logic of .save() to do an update, check whether anything was updated, and if not, do an insert. While doing that I ended up doing a bit more; for example, save_base now returns UPDATED or INSERTED. It is in ticket #16649 and could be of interest when creating a patch for this ticket.
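The update-then-insert flow that patch describes can be sketched in a few lines (sqlite3 stands in for the ORM layer here; UPDATED/INSERTED mirror the proposed return values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)")

UPDATED, INSERTED = "UPDATED", "INSERTED"

def save(conn, pk, name):
    # Try the UPDATE first; a rows-modified count of zero means the row
    # doesn't exist yet, so fall back to INSERT.
    cur = conn.execute("UPDATE item SET name = ? WHERE id = ?", (name, pk))
    if cur.rowcount:
        return UPDATED
    conn.execute("INSERT INTO item (id, name) VALUES (?, ?)", (pk, name))
    return INSERTED

print(save(conn, 1, "first"))   # INSERTED
print(save(conn, 1, "second"))  # UPDATED
```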

That looks higher-level than what I was going for. It works when all you have is users trying to avoid trampling each other's edits, but consider:

That mechanism leaves data open to being locked indefinitely if a person starts an edit and then leaves the browser (tab) to do something else. I don't see this kind of thing often on the web, as it's kind of intrusive to others who expect to be able to edit the data.

Some will want the choice to not pre-lock and to be more reactive to problems. This is the model taken by Bugzilla in one of my original examples.

This is either awkward or doesn't work at all if your site provides APIs for managing any of the same data that can be edited by users on the site. What does the API do while the data is locked, return 409 until it's unlocked? API consumers now have to build retry strategies around this? (hope that consumer wasn't trying to fulfill a page request of their own...)

I'm hoping to build something a little more basic, with which you could support the reactive models. Obviously that project would be a great choice for CMS-centric frameworks (fein, djangocms, etc.) and sites.

There's sort of too much going on here for a single ticket to track easily. Can someone split off the delete/update-should-return-number-of-rows part into a separate ticket (that's an obvious fix), keeping this ticket for discussing the save-if stuff?

I'm marking this "someday" because it's a big enough feature that whether we accept it really depends on the implementation. If the implementation is compelling enough it'll go in, but it's a pretty serious idea and I can't accept or reject it without a good look at the implementation. estebistec, I'll try to keep an eye on what you're doing on GitHub, but if you get to a point where you'd like some feedback, a thread on django-dev is going to be your best bet.

There's sort of too much going on here for a single ticket to track easily. Can someone split off the delete/update-should-return-number-of-rows part into a separate ticket (that's an obvious fix), keeping this ticket for discussing the save-if stuff?

Agreed. I can create the new ticket for the num-rows part. Is there a way to have tickets depend on each other in Trac (i.e., have the num-rows ticket block this one once I create it)?


In Django models, modified-rows count would help in handling optimistic concurrency control problems →
In Django models, save/delete preconditions would help in handling optimistic concurrency control problems

Updating the summary to be more specific now that #16891 directly confronts the rows-modified aspect of this.