Deletes are applied to the consolidated database after all inserts and updates are applied. When deletes are applied, tables are processed in the opposite order from the way they appear in the upload. When a row being deleted references a row in another table that is also being deleted, this order of operations ensures that the referencing row is deleted before the referenced row.

Problem:

We constantly run into a scenario where a customer deletes a row from the remote database and then reinserts a row with an identical value in a unique index. Since the primary key is an autoincrement value, the remote database does not treat this as a row update when uploading the data to the consolidated database. We have tried to avoid this by setting the synchronization interval as low as 1 minute, but the problem still recurs, and since there are 15 databases synchronizing each minute I don't think we can set a tighter schedule.

The remote database uses a log-based synchronization subscription.

Simple repro

A synchronized table, InsertThenDelete, with two columns: aPrimaryKey (INT, PRIMARY KEY) and someUniqueData (VARCHAR, UNIQUE).
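A minimal sketch of the repro in SQL (column size and sample value invented):

    CREATE TABLE InsertThenDelete (
        aPrimaryKey    INT DEFAULT AUTOINCREMENT PRIMARY KEY,
        someUniqueData VARCHAR(100) NOT NULL UNIQUE
    );

    -- Between two synchronizations, the user deletes a row and
    -- re-enters the same unique value:
    DELETE FROM InsertThenDelete WHERE someUniqueData = 'ABC';
    INSERT INTO InsertThenDelete( someUniqueData ) VALUES ( 'ABC' );
    COMMIT;

    -- The upload now contains an INSERT with a new primary key and a
    -- DELETE of the old one. Because inserts are applied before deletes,
    -- the INSERT violates the unique index on someUniqueData at the
    -- consolidated database.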

Any tips for how I may change this? I really need that unique index to be there, and I cannot expect the users to know when the data are synchronized so they can time a delete and an insert to land in two separate synchronization batches.

We have used a technique similar to the one Breck has suggested to "delay the insert" until after all deletes are processed. Another method would be to switch to transactional uploads (dbmlsync -tu):

When you use -tu, the order of transactions on the remote database is always preserved on the consolidated database.

Note however that this affects all changes sent to the consolidated database, and that MobiLink still coalesces all changes to a single row within one transaction into only one insert/update/delete, so you should use small transactions on the client, too. So in your sample, you would have to add a COMMIT between the DELETE and the INSERT (or use auto-commit mode).
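For example, with the repro table from above, committing between the two statements keeps the delete and the insert in separate upload transactions when -tu is used (a sketch, auto-commit off assumed):

    DELETE FROM InsertThenDelete WHERE someUniqueData = 'ABC';
    COMMIT;  -- first upload transaction: the delete is applied first

    INSERT INTO InsertThenDelete( someUniqueData ) VALUES ( 'ABC' );
    COMMIT;  -- second upload transaction: the insert follows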

Please be sure to test the performance impact before switching to transactional uploads. When using transactional uploads, each COMMIT in the transaction log results in a separate synchronization. Since you've already said you are synchronizing each minute, you're probably OK, but I'd be wary of using -tu on a remote database that had not synchronized in a week and had 1000 commits to process.

Your sample uses the "re-use" of a primary key. That's a no-go in a key-based replication/synchronization setup - but I'm sure you don't use that in your real data model.

If you have to delay modifications on UNIQUE keys (i.e. the users can delete a row and insert a new row with a different PK but the same UNIQUE KEY), and there are other tables referencing the new row, then you might need to delay these inserts as well, unless you can delay the FK check itself. With a SQL Anywhere consolidated database, you can delay the FK check by setting the wait_for_commit option or by using the CHECK ON COMMIT clause of the corresponding FK definition, as sketched below.

That would work since, when not using transactional uploads, the whole upload stream is handled as one big transaction, so the commit (and hence the FK check) would only happen after all operations are applied.
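A sketch of both variants against a SQL Anywhere consolidated (ChildTable, ParentTable and parentId are invented names):

    -- Variant 1: defer FK checking until COMMIT for this connection:
    SET TEMPORARY OPTION wait_for_commit = 'On';

    -- Variant 2: declare the FK itself as checked only at commit time:
    ALTER TABLE ChildTable
        ADD FOREIGN KEY ( parentId )
        REFERENCES ParentTable ( aPrimaryKey )
        CHECK ON COMMIT;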

I understand that schema changes are non-trivial, but in the long term, I'd suggest making the unique index you currently have on the table the primary key and removing the auto-increment primary key, as long as you never update the unique index column.
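If you do go that route, a hypothetical migration for the repro table could look like the following; on a synchronized table this also means adapting the synchronization scripts, so treat it as a sketch only:

    ALTER TABLE InsertThenDelete DELETE PRIMARY KEY;
    ALTER TABLE InsertThenDelete ADD PRIMARY KEY ( someUniqueData );
    ALTER TABLE InsertThenDelete DROP aPrimaryKey;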

For obvious reasons, I can't speak to Oskar Emil's requirements; however, I do think there are good reasons to use unique keys (with a "real-world meaning", visible to users) and still use surrogate keys as PKs on the same row.

A very simple example would be a common lookup table with a numeric PK and a further unique "sort number" that is used to enforce a particular ordering of the lookup values. In such a situation, when adding a new lookup value, one would sometimes need to adapt the sort numbers of existing entries. That might lead to MobiLink problems as well (although here because of the order of inserts/updates, not of deletes), as in the sketch below.
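For instance (invented table; the swap needs a temporary value to keep the unique index satisfied at every step):

    CREATE TABLE LookupValue (
        lookupId   INT DEFAULT AUTOINCREMENT PRIMARY KEY,
        sortNumber INT NOT NULL UNIQUE,
        label      VARCHAR(50) NOT NULL
    );

    -- Swapping the positions of two lookup values:
    UPDATE LookupValue SET sortNumber = -1 WHERE sortNumber = 10;
    UPDATE LookupValue SET sortNumber = 10 WHERE sortNumber = 20;
    UPDATE LookupValue SET sortNumber = 20 WHERE sortNumber = -1;
    COMMIT;

    -- After coalescing, the upload contains two updates (10 -> 20 and
    -- 20 -> 10); whichever is applied first violates the unique index.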

Not to mention UNIQUE KEYs that are used to mark a combination of two columns or a relationship as unique - for cases in which these combinations may change in the future (whereas the PKs mustn't change)...

I first started working with databases 6 years ago, and in that time I have already learned to avoid user-defined primary keys at all costs. They say it is a static value, but it is only a matter of time before they want to change that value.

Those values have business logic attached to them, and healthy businesses change over time while the underlying system that runs them remains the same.

That is why I think primary keys should always, if possible, be either an autoincremented number or a GUID hidden from users.