This is a question about the differences between a reload.sql script used to upgrade a SQL Remote database and one used to re-extract a SQL Remote database.

A verbose introduction (DBEXPLAIN -v):

We are about to migrate our SQL Remote databases from ASA 8.0.3 to SA 12.0.0 and are just planning the customized unload/reload process. Our goal is to build a small tool that does this migration automatically, i.e. without having to re-extract remotes.

Prerequisites:

All remote databases share the same schema, only data is different - though table IDs and formatted procedure code may be different based on the original version that the database was built with. After the migration, these subtle (and semantically irrelevant) differences should have vanished.

Using UNLOAD -ar is not possible because we want to improve some schema properties (e.g. replace the "anonymous" table check constraints with named ones, one of my favourite v9 features).

In addition, we have a custom extraction utility that does the remote-specific data extraction and uses a parametrized reload script to build the data.

So we want to handle the migration similarly to our custom extraction, i.e. build a tool that unloads the data and uses a parametrized reload script to build the database.

These two tasks (extraction vs. migration of a remote database) seem quite similar with the following differences:

The migration needs to handle the translog offset accordingly, whereas a newly extracted database just has its "fresh" log.

The migration needs to take care of the contents of the SQL Remote tables (sysremoteuser and syssubscription) whereas a newly extracted database has not yet used these values.

So my question is:

Can I use basically the same parametrized reload.sql script I use to re-extract, provided I ensure these two differences are handled accordingly?

That means I would not only parametrize the reload.sql script with the obvious parameters such as the remote user/FILE address and the GLOBAL_DATABASE_ID (as the re-extraction already does) but would also have to call sa_setremoteuser and sa_setsubscription with the corresponding values taken from the v8 database. (Yes, I'm aware these are undocumented procedures, but I'm sure they are built for exactly this usage...)
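To illustrate, here is a minimal sketch of what such a parametrized reload fragment might look like. The {braces} placeholder syntax and all names are hypothetical; only the statements themselves are standard SQL Anywhere / SQL Remote DDL:

```sql
-- Hypothetical {placeholders} would be filled in by the migration tool.
SET OPTION PUBLIC.global_database_id = {global_db_id};

-- Recreate the remote user with its FILE address:
GRANT CONNECT TO "{remote_user}" IDENTIFIED BY '{password}';
GRANT REMOTE TO "{remote_user}" TYPE "FILE" ADDRESS '{file_address}';

-- Migration only (not needed for a fresh extraction): additionally
-- restore the SQL Remote state captured from the v8 database via
-- sa_setremoteuser and sa_setsubscription.
```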

[This is not a complete answer, just my test results so far. Handle with care...]

Besides the schema and the data from user tables, the following topics seem to need particular care (and will be handled correctly by using DBUNLOAD -ar, of course):

For each table with (GLOBAL) AUTOINCREMENT columns, the value from systabcol.max_identity should be preserved, i.e. queried in the old database and set in the new one by sa_reset_identity - cf. DBUNLOAD -l.
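A sketch of that step - the table name and value are made up, and the catalog query assumes the SA 10+ SYSTABCOL view (older versions expose max_identity via SYS.SYSCOLUMN):

```sql
-- In the old database: read the high-water marks of AUTOINCREMENT columns.
SELECT t.table_name, c.column_name, c.max_identity
  FROM SYS.SYSTABCOL c
  JOIN SYS.SYSTABLE t ON t.table_id = c.table_id
 WHERE c.max_identity > 0;

-- In the new database: restore each value, e.g. for a hypothetical table:
CALL sa_reset_identity( 'MyTable', 'DBA', 4711 );
```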

User IDs should be preserved, particularly those of remote users - this ensures that logs from the old database can still be used for SQL Remote. They can be set by GRANT CONNECT ... AT.
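A sketch with a hypothetical user name and ID - the AT clause pins the user ID, the way dbunload-generated reload scripts do:

```sql
-- Old database: look up the existing user ID.
SELECT user_name, user_id
  FROM SYS.SYSUSERPERM
 WHERE user_name = 'Remote_1';

-- New database: recreate the user with that very ID (here assumed to be 102).
GRANT CONNECT TO "Remote_1" AT 102 IDENTIFIED BY 'secret';
```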

Publication IDs should be preserved - for the very same reason (cf. the method described in this answer from Bruce Hay).

The contents of the sysremoteuser SQL Remote system table have to be preserved - by setting them in the new database with the sa_setremoteuser procedure.

The contents of the syssubscription SQL Remote system table have to be preserved, too - by setting them in the new database with the sa_setsubscription procedure.
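Since both procedures are undocumented, their exact parameter lists must be verified against the target version; presumably they mirror the columns of the corresponding system tables. A hedged sketch of the last two steps:

```sql
-- Old database: capture the SQL Remote send/receive state.
SELECT * FROM SYS.SYSREMOTEUSER;
SELECT * FROM SYS.SYSSUBSCRIPTION;

-- New database: replay that state. The {braces} stand for the captured
-- column values - the parameter lists are an assumption, verify them!
CALL sa_setremoteuser( {values from the sysremoteuser row} );
CALL sa_setsubscription( {values from each syssubscription row} );
```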

Just for the record - that old list has worked well for us when migrating SQL Remote databases to newer SQL Anywhere versions in cases where a simple reload via dbunload -ar was not feasible (because we wanted to make use of newer features in the schema).

However, we preferred to keep two similar customized reload.sql scripts (one for freshly extracted remotes, one for migrated remotes) since the two scripts use different parameter sets - but the main part could be kept identical.