
Commit for every 'n' records in 'insert... select from' .

Hi

Our application has about 3000 tables. The application has a concept called a 'company'. Each application company can be related to a physical site of the business. For example, if we have two sites, one in London and one in Florida, then we have two sets of companies, and hence the application has 3000 + 3000 = 6000 tables.

Each table name has the company number attached at the end: 'payroll422' and 'payroll522' represent the same table pointing to different companies, 422 and 522. If a user in London enters data it goes into the 'payroll422' table, and if a user in Florida enters data it goes into the 'payroll522' table.

Once every month I have to populate the data from these companies into 'test' companies for the purpose of testing new patches of the application. The test database is on a different machine, which has much less horsepower (less memory and CPU). I can create a new set of test companies, say 423 and 523, in the test environment. My application easily creates the 3000 tables and indexes needed for the new companies.

Now my job is to populate the data from production companies 422 and 522 into 423 and 523. I have created database links between the PROD and TEST databases, and I have written a small script for the replication/copy of data as follows.
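The original script is not reproduced in the post, but from the description it is one INSERT ... SELECT per table over the database link, with a single commit. A minimal sketch of that kind of statement (the link name prod_link and the table names are assumptions):

```sql
-- Hypothetical sketch of the per-table copy described in the post.
-- prod_link is an assumed database link name pointing at PROD.
INSERT INTO payroll423
SELECT * FROM payroll422@prod_link;

INSERT INTO general_ledger423
SELECT * FROM general_ledger422@prod_link;

COMMIT;  -- a single commit at the very end, as the post describes
```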

This script is working OK. It populates data from the PROD tables into the TEST tables, with one exception. Some of our tables, like 'general_ledger422', have about 3+ million records. The statement fails (basically it fills up the entire redolog tablespace) when the number of records is huge. It does this because I am committing just once for the entire table, so for tables with 3+ million records I am not committing until all records (3+ million) are inserted. The redolog fills up and it hangs.

So my question is: is it possible to commit for every 1000 records in the above 'insert into ... select from' statement? If so, how do I specify this?
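For reference, a single INSERT ... SELECT statement cannot commit partway through; the usual workaround is a PL/SQL loop that fetches and inserts in batches, committing after each one. A sketch, assuming Oracle 9iR2 or later (the table names, link name, and batch size of 1000 are assumptions):

```sql
-- Sketch: copy in batches of 1000 rows, committing after each batch
-- so the undo for any one transaction stays small.
DECLARE
  CURSOR c IS SELECT * FROM general_ledger422@prod_link;
  TYPE t_rows IS TABLE OF c%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO general_ledger423 VALUES l_rows(i);
    COMMIT;  -- one commit per 1000-row batch
  END LOOP;
  CLOSE c;
END;
/
```

Note that this keeps the undo (rollback) usage per transaction small; it does not reduce the total redo generated.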

Or if there is any other way to accomplish this replication/copy, please suggest it.

REM =============================================
REM Set timing on and commit after all rows have
REM been inserted.
REM Note the time it takes to complete
REM =============================================

i) this is a data transfer to a different machine, with network overhead included as well
ii) it's only done once per month
iii) the target is a test database -- I'm guessing it's not sized as a production system, nor is the increased load due to frequent committing going to affect anyone else.

So although you are quite right, this may be a circumstance where it would be an acceptable sacrifice.

By the way football, one of the side effects of using the COPY command in this way is that you can put an ORDER BY on the select statement. If your COPY fails on a big data transfer, you can determine where it failed (by querying the target table) and start the copy again halfway through by putting a condition on the SELECT. This works best when you ORDER BY a very selective column and can get Oracle to use an index to fetch the rows in the correct order -- you don't want to be doing large sorts on your production database, I'm thinking.
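The SQL*Plus COPY command commits every ARRAYSIZE * COPYCOMMIT fetched rows, which addresses the original "commit every n records" question directly. A sketch (the connect string, table names, and ordering column trans_id are assumptions):

```sql
-- SQL*Plus COPY sketch: commit after every batch of 1000 rows.
SET ARRAYSIZE 1000
SET COPYCOMMIT 1

COPY FROM scott/tiger@prod -
INSERT general_ledger423 -
USING SELECT * FROM general_ledger422 ORDER BY trans_id;

-- If the copy dies partway through, query the target for the highest
-- trans_id already copied and restart from there:
-- COPY FROM scott/tiger@prod -
-- INSERT general_ledger423 -
-- USING SELECT * FROM general_ledger422 -
-- WHERE trans_id > &last_copied ORDER BY trans_id;
```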

Originally posted by slimdave What a terrible way to design a database.

I agree with SlimDave on this. Have you thought about changing your schema to use just one set of tables, partitioning them by company, and then using Oracle row-level security to control who can see which company's data?
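To make the suggestion concrete, here is a sketch of a single payroll table list-partitioned on a company column (available from Oracle 9i; the column and partition names are assumptions), with row-level security then applied on company_id via DBMS_RLS rather than via separate tables:

```sql
-- Sketch: one table for all companies, partitioned by company.
CREATE TABLE payroll (
  company_id  NUMBER(6)     NOT NULL,
  emp_id      NUMBER(10)    NOT NULL,
  pay_date    DATE,
  pay_amount  NUMBER(12,2)
)
PARTITION BY LIST (company_id) (
  PARTITION p_422 VALUES (422),
  PARTITION p_522 VALUES (522)
);
```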

You might also look into SQL*Loader for moving the data. You could transform the data as you take it out of the database, compress the file, and then FTP it to your test system.
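A sketch of that flow -- spool the data to a flat file in SQL*Plus, then load it with a SQL*Loader control file (all file, table, and column names here are assumptions):

```sql
-- Step 1 (SQL*Plus on PROD): spool a delimited extract.
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 200
SPOOL general_ledger422.dat
SELECT trans_id || '|' || company_id || '|' ||
       TO_CHAR(pay_date, 'YYYY-MM-DD') || '|' || pay_amount
FROM   general_ledger422;
SPOOL OFF

-- Step 2 (TEST): a SQL*Loader control file for the extract, e.g.
--   LOAD DATA
--   INFILE 'general_ledger422.dat'
--   APPEND INTO TABLE general_ledger423
--   FIELDS TERMINATED BY '|'
--   (trans_id, company_id, pay_date DATE 'YYYY-MM-DD', pay_amount)
```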

Re: Commit for every 'n' records in 'insert... select from' .

Something does not make sense here. You said:

Originally posted by football This script is working OK. It populates data from the PROD tables into the TEST tables, with one exception. Some of our tables, like 'general_ledger422', have about 3+ million records. The statement fails (basically it fills up the entire redolog tablespace) when the number of records is huge.

What exactly do you mean by "redolog tablespace"? No such thing exists in Oracle. Are you talking about the *undo* (rollback) tablespace? If yes, then your thoughts about more frequent commits make some sense.

BUT! What really confused me is your next remark:

So for tables with 3+ million records, I am not commiting untill all records (3+ million) are filled. Redolog is filled and it hangs.

Hangs? Then you are not talking about the undo tablespace being filled, as that would not hang; it would simply throw an error and exit. If the thing really does simply "hang", then it sounds to me like your archive log destination has become full. If that is really the case, then there are other options, like using UNRECOVERABLE in a CTAS. And BTW, in this situation executing more frequent commits will not help in any way -- the number of commits has no influence on the amount of redo generated.
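The UNRECOVERABLE option mentioned here (spelled NOLOGGING in later Oracle releases) creates the target table with minimal redo, sidestepping the archive log problem entirely for a monthly refresh of a test database. A sketch (the link and table names are assumptions):

```sql
-- Sketch: create-table-as-select with minimal redo generation.
-- Note: the table is unrecoverable from redo until the next backup.
CREATE TABLE general_ledger423 NOLOGGING
AS SELECT * FROM general_ledger422@prod_link;
```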

So which one is it? Does your undo tablespace fill up, or is it your archive log destination that becomes full and freezes further database operations?

Last edited by jmodic; 11-06-2002 at 09:19 AM.

Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?

Here I am...

I am going to try SLIMDAVE's recommendation to use the COPY command. I looked into the SQL*Plus documentation on this command; it seems to suit my requirement.

Well, my (ERP) application creates the tables with the company number attached to the tables' names. There is nothing much I can do about it.

As STMONTGO pointed out, regular/frequent commits may affect performance. But I don't see any other choice.

JMODIC, well, I was wrong when I said redolog tablespace; I meant the rollback tablespace. We do switch off archive logging while doing this copy, so there is no question of archive logs filling up the disk space. The situation here is that the rollback tablespace fills up, it really does hang (on the application side), and when I look at the alert log there is a message/error "

Re: Re: Here I am...

Originally posted by chrisrlong BTW, considering what we do know about this model, I shudder to think about why they might actually have 3000! tables.

Because Oracle can only have 1000 columns and they needed:
PAY_DATE_2201, PAY_AMOUNT_2201, PAY_DATE_2202, PAY_AMOUNT_2202, PAY_DATE_2203, PAY_AMOUNT_2203, PAY_DATE_2204, PAY_AMOUNT_2204, etc. to implement multiple pay_dates and pay_amounts...

Re: Re: Re: Here I am...

Originally posted by marist89 Because Oracle can only have 1000 columns and they needed:
PAY_DATE_2201, PAY_AMOUNT_2201, PAY_DATE_2202, PAY_AMOUNT_2202, PAY_DATE_2203, PAY_AMOUNT_2203, PAY_DATE_2204, PAY_AMOUNT_2204, etc. to implement multiple pay_dates and pay_amounts...

I agree a fully denormalized schema is the mark of every good programmer.