How to back up a table properly before installation of an online data-fix?

Do you give the functional support and maintenance teams direct read-write access to your production databases? We try not to, and one of the reasons is that we want every change to be tested before it is deployed to production. This in turn means that the DBAs are the ones who apply all the changes, including small data-fixes. We have procedures describing how data-fix changes have to be delivered to the DBAs for installation (scripts along with at least approximate information on how many rows should be changed, as well as some evidence that the script has been tested on a pre-prod database).

Last week we got a request to install a data-fix (replace some values of one column in a particular table) on a running production database and to create a backup of the table before making the changes. It seems a very simple approach can be used:

Create a backup of the table using CTAS statement (create table as select)

Run the provided update statement

commit
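
In SQL this naive sequence would look roughly like the sketch below. The table names t1_source and t1_backup are the ones used later in this post; the update statement is just a placeholder for whatever data-fix was delivered:

```sql
-- 1. back up the table using CTAS
create table t1_backup as select * from t1_source;

-- 2. run the provided data-fix (placeholder statement)
update t1_source set status = 'FIXED' where status = 'BROKEN';

-- 3. make the change permanent
commit;
```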

…but if you think about it, it’s not so straightforward after all…

You probably guessed already that the problem with the above sequence is that the data in the table can change while the CTAS is running (and the chances increase with the size of the table). Read consistency ensures that all rows are read as of the SCN at which the statement was issued, so any changes made to the data after the CTAS started will not be included in the backup table.

Not a big deal, right? Let’s lock the source table before the beginning of the CTAS, so we get the sequence:

lock the table in exclusive mode

Create a backup of the table using CTAS statement (create table as select…)

Run the provided update statement

commit
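
As a sketch (same placeholder names as before), the locked variant would be:

```sql
-- 1. take an exclusive lock so no other session can modify the table
lock table t1_source in exclusive mode;

-- 2. back up the table -- but CTAS is DDL, so it commits
--    implicitly and thereby releases the lock taken above
create table t1_backup as select * from t1_source;

-- 3. a window now exists in which other sessions can change
--    t1_source before our data-fix runs (placeholder statement)
update t1_source set status = 'FIXED' where status = 'BROKEN';

commit;
```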

Unfortunately this is not good enough: create table ... is a DDL statement, and DDL commits implicitly, which releases the lock. So the chance remains that the source table is updated right after the CTAS completes but before the update statement runs, making the backup unreliable again.

I came up with the following solution for this problem (let’s say t1_source is the table that has to be changed):

First, create an empty backup table. This is the only DDL statement that will be used, so we don’t have to worry about implicit commits anymore:

SQL> create table t1_backup as select * from t1_source where 1=0;

Table created.

Lock the source table to prevent any changes to the data while we create the backup:

SQL> lock table t1_source in exclusive mode;

Copy the rows into the backup table with a plain insert ... select (insert is DML, so it does not commit and the lock stays in place):

SQL> insert into t1_backup select * from t1_source;

Run the provided update statement

Commit the changes (which also releases the exclusive lock on the source table)

SQL> commit;

Commit complete.

This sequence makes sure we have an exact copy of the table that’s taken just before the changes were installed.
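
Put together, the whole sequence might look like this (the update statement is again only a placeholder for the delivered data-fix):

```sql
-- 1. empty backup table: the only DDL, done before anything else
create table t1_backup as select * from t1_source where 1 = 0;

-- 2. block all changes to the source table
lock table t1_source in exclusive mode;

-- 3. copy the rows with plain DML: no implicit commit,
--    so the lock from step 2 is still held
insert into t1_backup select * from t1_source;

-- 4. the delivered data-fix (placeholder)
update t1_source set status = 'FIXED' where status = 'BROKEN';

-- 5. a single commit makes the backup and the fix effective
--    together and releases the exclusive lock
commit;
```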

I realize this could be quite a painful procedure if the tables are huge (locking a table that is actively used for a long time is not a good idea during a busy working day, and a lot of space is needed to create a copy of the table). So how would you perform the task? Do you see another way that is more efficient, but still as reliable, compared to the one I posted above?