Oracle – for when it was like that when you got there

I’m determined to adopt a positive mental attitude this year.
When the train company explains delays by saying we have the wrong kind of sunshine, I prefer to marvel at the fact that the sun is shining at all in the depths of an English Winter. Let’s face it, it’s a rare enough phenomenon in the summer.
The slow-running of the train caused by this rare natural phenomenon also gives me more time to write this post.
There’s more “good” news – Datapump Import tends to be rather slow when it comes to applying optimizer statistics.
This is because it insists on doing it one row at a time.
All of which provides us with an opportunity to optimize our import job… by not bothering to import the stats at all.
“Hang on”, you’re thinking, “won’t that mean you have to re-gather stats after the import, which probably won’t be that quick either?”
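As a hedged sketch of the idea, one way to skip the statistics when using the DBMS_DATAPUMP API is the EXCLUDE_PATH_EXPR metadata filter (the dump file name here is hypothetical, and the job assumes a dump file already sitting in DATA_PUMP_DIR):

```sql
DECLARE
    l_handle NUMBER;
BEGIN
    -- Open a schema-mode import job
    l_handle := DBMS_DATAPUMP.OPEN(
        operation => 'IMPORT',
        job_mode  => 'SCHEMA'
    );
    -- Point the job at an existing dump file (hypothetical name)
    DBMS_DATAPUMP.ADD_FILE(
        handle    => l_handle,
        filename  => 'my_export.dmp',
        directory => 'DATA_PUMP_DIR'
    );
    -- Skip the row-by-row statistics import
    DBMS_DATAPUMP.METADATA_FILTER(
        handle => l_handle,
        name   => 'EXCLUDE_PATH_EXPR',
        value  => q'[IN ('STATISTICS')]'
    );
    DBMS_DATAPUMP.START_JOB(l_handle);
    DBMS_DATAPUMP.DETACH(l_handle);
END;
/
```

You can then re-gather statistics afterwards with DBMS_STATS (e.g. GATHER_SCHEMA_STATS), which runs set-based rather than row-by-row.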

So, the World Cup is in full swing.
Now that the lesser teams have fallen by the wayside (England), we can get on with enjoying a feast of footie.
As well as a glut of goals, the current tournament has given us a salutary reminder of the importance of diet for elite athletes.
After predictably (and brilliantly) destroying England single-handedly, Luis Suárez found himself a bit peckish and nipped out for an Italian. Now the whole world seems to be commenting on his eating habits.
Like Luis, you may find yourself thinking that you’ve bitten off more than you can chew when confronted by DBMS_DATAPUMP.

The documentation does offer some help…to an extent. However, the whole thing can seem a bit fiddly, especially if you’re used to the more traditional command-line interface for Datapump.

What follows is a tour through DBMS_DATAPUMP based on my own (sometimes painful) experience, broken down into bite-sized chunks.
Much of the functionality to filter object types and even data is common to both Exports and Imports.
So, the approach I’ve taken is to cover the Export first, with a view to finally producing a Full Database export.
I’ve then used the Import process against this to demonstrate some of the package’s filtering capabilities.
So, what’s on the menu today?

Privileges required to run a DBMS_DATAPUMP job from your current schema and for the whole database

Running a consistent export

Running datapump jobs in the background

Monitoring running jobs

Importing from one schema to another

Specifying the types of objects to include in Exports and Imports

Specifying subsets of data

DDL only Jobs

How to Kill a Datapump Job

The full code examples have all been written and tested on Oracle XE 11gR2.
I’ve tried to maximise the use of in-line hard-coded values and minimise the number of variables in an attempt to make the code easier to follow.
Also, in these examples I’ve made use of the default DATA_PUMP_DIR directory object, but you can use any directory object to which you have the appropriate privileges.
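Before we get to the individual courses, here is a minimal sketch of the sort of job we’re building towards — a full database export via the DBMS_DATAPUMP API, using the default DATA_PUMP_DIR mentioned above (the dump and log file names are hypothetical):

```sql
DECLARE
    l_handle    NUMBER;
    l_job_state VARCHAR2(30);
BEGIN
    -- Open a full database export job
    l_handle := DBMS_DATAPUMP.OPEN(
        operation => 'EXPORT',
        job_mode  => 'FULL'
    );
    -- Dump file and log file both go to the default directory object
    DBMS_DATAPUMP.ADD_FILE(
        handle    => l_handle,
        filename  => 'full_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
    );
    DBMS_DATAPUMP.ADD_FILE(
        handle    => l_handle,
        filename  => 'full_export.log',
        directory => 'DATA_PUMP_DIR',
        filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
    );
    DBMS_DATAPUMP.START_JOB(l_handle);
    -- Block until the job finishes, then report its final state
    DBMS_DATAPUMP.WAIT_FOR_JOB(l_handle, l_job_state);
    DBMS_OUTPUT.PUT_LINE('Job finished with state : ' || l_job_state);
END;
/
```

Note that WAIT_FOR_JOB makes this run in the foreground; detaching instead (and running in the background) is one of the items on the menu above.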

For dessert, there are a couple of other DBMS_DATAPUMP features that I have found useful which are specific to Enterprise Edition (in one case, with the Partitioning Option):