
Insert Into .. Select ..

Hi,

I'm trying to copy a few huge tables. I can't do an export/import due to some restrictions, so I chose to use the INSERT INTO .. SELECT command. The problem is that, since the tables are huge, huge rollback segments are generated.

Is it possible to issue COMMITs partway through an INSERT INTO <tab1> SELECT ... FROM <tab2> command to avoid this?
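You can't commit in the middle of a single SQL statement, but you can break the copy into batches in PL/SQL and commit every so many rows. A rough sketch (table names and the batch size are placeholders, not from your schema; the `INSERT ... VALUES record` shorthand needs a reasonably recent Oracle release, otherwise list the columns explicitly):
------------------------
DECLARE
  CURSOR src IS SELECT * FROM tab2;
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN src LOOP
    INSERT INTO tab1 VALUES r;   -- or name the columns explicitly
    v_count := v_count + 1;
    IF MOD(v_count, 10000) = 0 THEN
      COMMIT;                    -- keeps undo for any one transaction small
    END IF;
  END LOOP;
  COMMIT;
END;
/
------------------------
One caveat: committing while the cursor is still open means fetching across commits, which can raise ORA-01555 (snapshot too old) against the source table if other sessions are changing it, so it's safest on a quiet system.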

-----------------
For the same problem, I created a huge rollback segment and issued a single commit at the end, and it worked. I set the OPTIMAL parameter on the rollback segment, but it did not shrink after this huge insert. So when I tried to copy the second table, instead of releasing and reusing the same extents, the rollback segments started to grow again.
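If OPTIMAL doesn't bring the segment back down on its own, you can also shrink it manually instead of dropping and recreating it (the segment name here is just an example):
------------------------
ALTER ROLLBACK SEGMENT "RBSSUPERBIG" SHRINK;          -- shrink toward OPTIMAL
ALTER ROLLBACK SEGMENT "RBSSUPERBIG" SHRINK TO 10M;   -- or to an explicit size
------------------------
The shrink only takes effect on extents that are not currently in use by an active transaction.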

Take a look at the data and see how it might group up: maybe by date, month, year, transaction number, or user name. You may even have to use SUBSTR(<field>,1,1) to get smaller data groupings. Run grouping queries with counts (example below).
------------------------------------------
select FISCAL_YEAR, COUNT(FISCAL_YEAR) AS ROW_COUNT
FROM jrnl_header
GROUP BY FISCAL_YEAR;

And remember there is no shame in using smaller groupings if need be, such as:
---------------------------
select FISCAL_YEAR, ACCOUNTING_PERIOD, COUNT(FISCAL_YEAR) AS ROW_COUNT
FROM jrnl_header
GROUP BY FISCAL_YEAR, ACCOUNTING_PERIOD;

Once you figure out how to break out your data, do multiple
INSERT INTO statements.

In the case above it would be:
------------------------

INSERT INTO New_jrnl_header SELECT * FROM jrnl_header
WHERE fiscal_year < 2000;
COMMIT;
INSERT INTO New_jrnl_header SELECT * FROM jrnl_header
WHERE fiscal_year = 2000;
COMMIT;
INSERT INTO New_jrnl_header SELECT * FROM jrnl_header
WHERE fiscal_year = 2001;
COMMIT;
----------------------
and repeat as needed. You may want to keep this exercise in mind when you decide to start partitioning tables: the same data-grouping work is needed when you plan partitions.
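For reference, the same fiscal-year breakout as a range-partitioned table might look like this (a sketch only: syntax assumes Oracle 8 or later, the column list and partition names are made up, and storage clauses are omitted):
------------------------
CREATE TABLE new_jrnl_header (
  fiscal_year        NUMBER(4),
  accounting_period  NUMBER(2)
  -- ... remaining jrnl_header columns ...
)
PARTITION BY RANGE (fiscal_year) (
  PARTITION p_pre2000 VALUES LESS THAN (2000),
  PARTITION p_2000    VALUES LESS THAN (2001),
  PARTITION p_2001    VALUES LESS THAN (2002),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);
------------------------
Queries that filter on fiscal_year then only touch the relevant partitions, which is where the speedup at the end of this post comes from.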

As for the rollback segments, I ran into the same problem when I went to build my partitioned tables. I found the easiest method was simply to recreate them when I was done.
----------------------------
ALTER ROLLBACK SEGMENT "R08" OFFLINE;
DROP ROLLBACK SEGMENT "R08";
CREATE ROLLBACK SEGMENT "R08" TABLESPACE "PSRBS" STORAGE ( INITIAL 1M NEXT 1M OPTIMAL 2M MINEXTENTS 2 MAXEXTENTS 50);
ALTER ROLLBACK SEGMENT "R08" ONLINE;

ALTER ROLLBACK SEGMENT "RBSSUPERBIG" OFFLINE;
DROP ROLLBACK SEGMENT "RBSSUPERBIG";
CREATE ROLLBACK SEGMENT "RBSSUPERBIG" TABLESPACE "PSRBS" STORAGE ( INITIAL 50M NEXT 50M OPTIMAL 200M MAXEXTENTS 10);
ALTER ROLLBACK SEGMENT "RBSSUPERBIG" ONLINE;
-------------------------------
And since I was usually doing this late at night, I would take the smaller rollback segments offline, so that a bigger transaction would not land on one of them and hit its MAXEXTENTS limit.

I know this is probably not the most efficient way of doing it, but as your database grows you will have to look at these issues at some point with partitioning. To give you an idea of what partitioning can do for you: when I broke a 7,787,268-row table up into monthly chunks, the access time for individual rows went from 10 to 70 msec, but the selection of rows over a two-month period went from 1000+ msec to <150 msec.