I understand that nobody can debug this from where you are, but I was wondering if anybody has seen anything like this or has any insights into what could be going on. I just did a 112G file migration of production data using oracle_datapump, so I know this works in principle. When I tried it on my test instance, I saw stuff like this:

Does anybody have any idea why it could be taking 1800 seconds to select one record from a not very big table? File corruption? Disc fragmentation? Oracle instance configuration? I'll take all guesses and conjectures. Could it be something that I've done wrong?

I did that, but I don't see anything. import.txt is the output of tkprof. The raw trace is too big to post. The actual SELECT on the external table is at the end of import.txt. Search on ACCOUNT_ORA_GTT. (It's a different table from the one I posted, but same problem.) It just shows a huge elapsed time and nothing else. I ran the trace using ALTER SESSION SET EVENTS='10046 trace name context forever, level 12', which I assumed would show me everything I might need. I just can't see anything interesting, which is not to say it's not there. I just don't know what to look for.
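For anyone who wants to reproduce this, here is roughly the sequence I used, as a sketch; the trace file name below is just a placeholder, and sort=exeela orders statements by elapsed execution time:

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the slow SELECT against the external table here
ALTER SESSION SET EVENTS '10046 trace name context off';

tkprof orcl_ora_12345.trc import.txt sys=no waits=yes sort=exeela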

It is the actual content of the raw trace file which holds the answer.
The DB innards are doing something exceedingly repetitive, and the trace file can be dissected to answer where the time is actually being spent.

Lalit Kumar B:

scottwmackey wrote on Sun, 11 August 2013 01:57

I did that, but I don't see anything. import.txt is the output of tkprof. The raw trace is too big to post. The actual SELECT on the external table is at the end of import.txt. Search on ACCOUNT_ORA_GTT. (It's a different table from the one I posted, but same problem.) It just shows a huge elapsed time and nothing else. I ran the trace using ALTER SESSION SET EVENTS='10046 trace name context forever, level 12', which I assumed would show me everything I might need. I just can't see anything interesting, which is not to say it's not there. I just don't know what to look for.

You don't have to look any further before you clarify something. Looking at the tkprof output, I see that the user being used is SYS.

That shows you are working as user SYS. If you connect AS SYSDBA, your schema name will appear to be SYS. The question is: why do you need to log in as SYS?
SYS and SYSTEM are Oracle's internal data dictionary accounts and are not for general use.
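For example, in SQL*Plus:

SQL> CONNECT / AS SYSDBA
Connected.
SQL> SHOW USER
USER is "SYS"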

What is the time if you do it with Data Pump instead of the external table?
But, as far as I can see, you did not activate the trace with wait events (or you excluded them in your tkprof report).
Use DBMS_SESSION or DBMS_MONITOR.SESSION_TRACE_ENABLE to activate it with wait events:
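A minimal sketch for tracing your own session (NULL for session_id and serial_num means the current session):

BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => NULL,
    serial_num => NULL,
    waits      => TRUE,   -- include wait events in the trace
    binds      => TRUE);  -- include bind variable values
END;
/

-- run the SELECT against the external table, then:

BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE;
END;
/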

I then deleted every block of this type that followed a block of the same type. I posted the result, which I hope is just the relevant raw trace. If there are any expert trace file readers out there, I would love to know what you see. It looks to me like it registers my SELECT at 2013-08-10 15:32:31.332, maybe even fetches the record, then just goes back to sleep, wakes up at 2013-08-10 16:02:39.081, and returns my data. This makes no sense to me.

John Watson:

I think your problem may lie in the nature of a Data Pump dump: the table that describes the dump and its contents is included at the end. So the entire file must be read; in effect, your whole table is populated before the ROWNUM predicate is applied. I am, of course, open to correction on this.
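To make that concrete, here is a hypothetical external table of the kind under discussion (the table name, columns, directory object, and file name are all made up):

CREATE TABLE account_ext (
  account_id NUMBER,
  name       VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY data_dir
  LOCATION ('account.dmp')
);

-- If the dump's self-describing metadata sits at the end of the file,
-- even this one-row probe may have to scan the entire dump:
SELECT * FROM account_ext WHERE ROWNUM = 1;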

You get a related problem when combining export and import: with the old exp/imp utilities, you can pipe the results of an export to an import, and both run in parallel. You can't do that with expdp/impdp; you must let the export run to completion before you can import. (Yes, I do know about network mode, which I think bypasses the limitation by transferring the job control table first.)
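A rough sketch of the contrast (the connect strings, the named pipe, and the source_db database link are purely illustrative):

# old exp/imp: export and import run in parallel through a named pipe
mkfifo /tmp/exp_pipe
imp scott/tiger@target FILE=/tmp/exp_pipe FULL=y &
exp scott/tiger@source FILE=/tmp/exp_pipe FULL=y

# expdp/impdp cannot be piped; network mode avoids the dump file entirely
impdp scott/tiger@target NETWORK_LINK=source_db TABLES=account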

I don't think your assumptions about the nature of oracle_datapump are correct. First, that would mean the engineers at Oracle are pretty incompetent, which I am pretty sure they are not. Second, even if you were correct, wouldn't you think it a bit odd for it to take 1800 seconds to read a 15M file? That would be right around 8K/s. Just for the sake of verifying my sanity, I ran a couple of tests. First, same file, same directory, different Oracle instance on a different server on a different domain (I think).

It looks like there is something strange between the Oracle instance and the /data directory that is mounted to it. There is still something I am not understanding, though, because copying the file took less than a second, so I would think it can't be a problem with the server being able to read from the disc.

If I didn't know better, I would say there is something telling Oracle to just wait 30 minutes before reading the file. I can copy the entire 2G file to the server's drive, point the external table at that file, and load it in about 11 minutes. That just makes no sense.
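For reference, repointing the external table at the local copy looks roughly like this (the directory path, directory object, and file name are hypothetical):

CREATE OR REPLACE DIRECTORY local_dir AS '/u01/app/oracle/local_data';

ALTER TABLE account_ext DEFAULT DIRECTORY local_dir;
ALTER TABLE account_ext LOCATION ('account.dmp');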

First things first: I realize I made a mistake in the test I just posted. I was selecting from the same table for both of them. For the record, I have run several of these tests and the results are always consistent. I reran the test with the correct _10 external table and the results are even more interesting. This is what happens when I query the real 10-line file