That’s a particularly interesting observation. Which version was that, and have you seen anything that looks like it might be a patch related to the lock time?
The manuals suggest running the resynchronization routine more than once, of course, but I guess you probably did that anyway.

Yes, undo and redo was a huge problem. And the final journal apply was locking the table not for minutes but for a few hours (the table was quite heavily used for most of the day). However, I have never tried DBMS_REDEFINITION in 11gR2, so I don’t know whether things have changed since my last try.

I would guess that the volume of undo and redo generated was a problem, not to mention the work done in creating new indexes. Nologging operations could help, of course, so long as you back up the tablespaces before you do the final switch. Then there’s always the question of how much time it takes for the final bit of journalling to be applied – I’ve seen a couple of reports of the tables being locked for a couple of minutes at that step (though I’ve not been able to reproduce the problem, I think it’s one that’s only likely to happen at large volumes and high throughput).
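As a reminder of the mechanism under discussion, here is a minimal sketch of the online redefinition calls involved; the schema, table, and interim-table names are made up for illustration. The intermediate SYNC_INTERIM_TABLE call is the “resynchronize more than once” step, and FINISH_REDEF_TABLE is where the final journal apply (and the reported lock) happens:

```sql
-- Illustrative only: SCOTT.BIG_TAB and BIG_TAB_INTERIM are hypothetical names.
BEGIN
  -- Check the table is a candidate for online redefinition.
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB');

  -- Start copying data into the (pre-created) interim table.
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname      => 'SCOTT',
    orig_table => 'BIG_TAB',
    int_table  => 'BIG_TAB_INTERIM'
  );

  -- Optionally resynchronize part-way through, possibly several times,
  -- to reduce the volume of journalled changes left for the final step.
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM');

  -- Apply the remaining journal and swap the tables; this is the step
  -- where the table lock discussed above can appear.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM');
END;
/
```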

Thanks for the comment – it’s one I should have raised myself (and one I highlighted, I think, in Practical Oracle 8i): the exchange may be faster with unique (or PK) constraints in place if you choose the “without validation” option.

For those who never bought a copy, here’s a link to an article on my old website explaining the anomaly. (The link is to a page introducing a Word document, and includes an addendum to that document.)

We have used that approach several times and it worked. The only problem is that ALTER TABLE … EXCHANGE (SUB)PARTITION still does not work with primary key/unique constraints in a satisfactory manner (one has to use the WITH VALIDATION syntax when a PK is in place). It is quite unbelievable, since the problem has not been fixed by Oracle development since Oracle 8.
So for such a large table (2TB), the process could take some time. But I would still favour it over DBMS_REDEFINITION.
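The exchange syntax being discussed can be sketched as follows; the table and partition names are hypothetical, and which form is legal depends on the constraints in place:

```sql
-- Illustrative only: big_tab, p2011, and big_tab_stage are made-up names.

-- With an enforced primary key / unique constraint, the exchange
-- typically has to validate the incoming data (the slow case):
ALTER TABLE big_tab
  EXCHANGE PARTITION p2011
  WITH TABLE big_tab_stage
  WITH VALIDATION;

-- When validation can safely be skipped, the exchange is a
-- near-instant data-dictionary operation:
ALTER TABLE big_tab
  EXCHANGE PARTITION p2011
  WITH TABLE big_tab_stage
  WITHOUT VALIDATION;
```

The WITH VALIDATION form scans the exchanged segment to confirm every row belongs in the target partition, which is where the time goes on a multi-terabyte table.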

Regards
Pavol

By: Pavol Babel
https://jonathanlewis.wordpress.com/2011/09/30/table-rebuilds-2/#comment-41908
Sun, 02 Oct 2011 11:39:12 +0000

For a table with 2TB? We tried to use DBMS_REDEFINITION for large data (approx. 500GB) and it failed…
By: Rao
https://jonathanlewis.wordpress.com/2011/09/30/table-rebuilds-2/#comment-41899
Fri, 30 Sep 2011 17:15:25 +0000

Thanks Mr Lewis, I am going to try that approach, step by step. I think that may work.
Regards,
RP
By: Jonathan Lewis
https://jonathanlewis.wordpress.com/2011/09/30/table-rebuilds-2/#comment-41896
Fri, 30 Sep 2011 12:51:53 +0000

Flado,
That’s not a nitpick – thanks for pointing out the error.
By: Flado
https://jonathanlewis.wordpress.com/2011/09/30/table-rebuilds-2/#comment-41895
Fri, 30 Sep 2011 12:32:03 +0000

Sorry about the nitpick, but I got a bit confused by “the former” and “the latter” cases – I think they need to swap places. “The former” (the first-mentioned) case is the one where “new partitions should be hash partitioned”, and that’s what your process appears to accomplish, unless I got irreparably confused.
Cheers,
Flado