I have imported data into the database with SQL*Loader into a flat staging table. Now I need to move the data from this table to another table. This is a production system and I must keep it online, so I decided to write a script that moves the data in small chunks and commits frequently to avoid waits and table locks.
Regarding the script I have a question. I can do a bulk load of rowids. Is it possible to optimize the insert and delete in a similar way, instead of doing the insert/delete in a loop for each rowid?

FORALL applies to only one statement, not two.
So you have to do two FORALL "loops": one for the INSERT and one for the DELETE.
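A minimal sketch of that pattern, assuming a staging table `stage_tab` and a target table `target_tab` with matching column lists (both names are placeholders for your own tables):

```sql
DECLARE
  CURSOR c IS SELECT rowid FROM stage_tab;
  TYPE rid_t IS TABLE OF ROWID;
  l_rids rid_t;
BEGIN
  OPEN c;
  LOOP
    -- bulk collect one chunk of rowids
    FETCH c BULK COLLECT INTO l_rids LIMIT 1000;
    EXIT WHEN l_rids.COUNT = 0;

    -- first FORALL: copy the chunk into the target table
    FORALL i IN 1 .. l_rids.COUNT
      INSERT INTO target_tab
        SELECT * FROM stage_tab WHERE rowid = l_rids(i);

    -- second FORALL: remove the copied rows from the staging table
    FORALL i IN 1 .. l_rids.COUNT
      DELETE FROM stage_tab WHERE rowid = l_rids(i);

    COMMIT;  -- commit per chunk to keep undo and lock duration small
  END LOOP;
  CLOSE c;
END;
/
```

Each FORALL binds the whole collection into a single SQL call, so you avoid a context switch per rowid while still committing in small batches.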

Anyway, I maintain that you should use the DBMS_PARALLEL_EXECUTE package, as it is built for exactly this. It supports restarting from where it stopped when errors are raised, not to mention that it can process the chunks in parallel if you want.
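A sketch of that approach, again assuming hypothetical table names `stage_tab` and `target_tab` and a made-up task name; chunk size and parallel level are illustrative values you would tune:

```sql
DECLARE
  l_task VARCHAR2(30) := 'MOVE_STAGE_DATA';  -- arbitrary task name
  l_sql  VARCHAR2(4000);
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task);

  -- split the staging table into rowid ranges of roughly 10000 rows
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => l_task,
    table_owner => USER,
    table_name  => 'STAGE_TAB',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- the statement must use the :start_id and :end_id placeholders;
  -- each chunk is executed and committed independently
  l_sql := 'INSERT INTO target_tab
              SELECT * FROM stage_tab
               WHERE rowid BETWEEN :start_id AND :end_id';

  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => l_task,
    sql_stmt       => l_sql,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);  -- run 4 chunks at a time; use 1 for serial
END;
/
```

Because chunk status is tracked in the task, failed chunks can be retried with DBMS_PARALLEL_EXECUTE.RESUME_TASK without redoing the chunks that already committed.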

Inserting data does not generate any table locks, other than preventing changes to the table structure. Inserts generate very little redo. Your users can't access the new data until it is committed, so just do the insert and commit when it is done.

> Inserts generate very little redo.
The above is NOT correct.
The whole INSERT statement, including all values for the new row, must exist in the redo.
To UNDO an INSERT, only the ROWID of the row to be deleted is required.

With regard to DML & REDO, below is the list from largest to smallest:
INSERT
UPDATE
DELETE

With regard to DML & UNDO, below is the list from largest to smallest:
DELETE
UPDATE
INSERT