Before presenting McDP, let me first recall a few things about export and import in Oracle.

These programs let you unload part of a database into files and reload those files into another database, or the same one, with or without modifications to the objects.
Oracle provides two mechanisms for this: the original one (exp/imp) and Data Pump (expdp/impdp).

The original mechanism is client-only, which means that the client programs do all the work.
"exp" reads the object definitions and data from the database through the SQL interface, like any database application, transfers them across the network (if the client is not on the database server), converts them into a proprietary format and writes them into one or several files.
"imp" reads these files, converts their content into SQL statements, applying the desired transformation (only changing the owner is possible), and sends these SQL statements to the database to create the objects and/or insert the data.

Data Pump is made of two parts: 1) a client part, the expdp and impdp programs, and 2) a server part, the Data Pump engine, which resides inside the database and runs in its instance.
The client program lets you start an export or import job that is executed by the server part; the dump files live on the database server. Unlike the previous mechanism, interrupting the client program does not abort the export/import: the job continues on the server side. Moreover, on a Control-C (SIGINT), the client program shows a prompt that lets you continue, stop or perform other actions on the current job. You can then leave the client program, let the job run on the server and, after a while, execute a new expdp/impdp program that attaches to the previous job to see what it is currently doing.
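For example, re-attaching to a running job from a new session might look like this (hypothetical job name, using the standard ATTACH parameter):

C:\>expdp michel ATTACH=SYS_EXPORT_SCHEMA_01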
Data Pump also lets you modify many object parameters during the import process (see the help below).
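A typical transformation at import time, with hypothetical names, is remapping a schema and a tablespace:

C:\>impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott2 REMAP_TABLESPACE=users:ts_d01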

Now back to McDP. The title is more the target than the current status: I focused on the "much more" part, to implement some features the current Data Pump client programs (expdp and impdp) do not have.
So what does McDP not do (for the moment):
- it does not export
- it does not import
- it is not an interactive client, just a command line tool

Having read that, you will ask: but what does it do then? No export, no import, so what is the relation with Data Pump?
The answer is that it is another Data Pump client, in the same way as expdp and impdp, but one that does different things.
Here they are:

List the currently known Data Pump jobs with some of their parameters: owner, job name, operation (EXPORT, IMPORT...), mode (FULL, SCHEMA, TABLE), state, degree of parallelism, number of active sessions, number of attached sessions and, if you want, the command line and last message (LIST command)

Display where a Data Pump job is, that is, what it has already done and what it is currently doing (WHERE command)

Display the content of a Data Pump dump (CONTENT command)

Display information about a Data Pump dump file (DISPLAY command)

Follow a Data Pump job and display information as if you were the one who launched it (FOLLOW command)

Get the DDL of a set of objects, applying transformations to them (DDL command)

Get the DDL of one or more Oracle users with or without their privileges to help you recreate them (USER command)

Copy a schema, or part of it, into another or the same schema, in a remote or local database (COPY command)

Interested?
So here's the (quite long) help:

C:\>McDP -h
McDP Utility by Michel Cadot: Version 2016.08.25
Copyright (c) Michel Cadot, 2016. All rights reserved.
Usage 1: mcdp.pl { { -h | --help }
| [<logon>] <command> [<option> [...]] }
where "-h" displays the version and the usage help and <command> can be one of:
{ -a | --alter } [<schema>.]<dp job> [[-opt] <execution option>]
Modifies some execution parameters of the job.
{ -c | --content } [<directory>:]<file>[,...]
[[--]show] [{[--]keep|[--]drop}]
[[-opt] <filter/transform/display option>]
Gives the content of dump file(s) accordingly with given options.
{ -cmd | --command } [<schema>.]<dp job> [[-opt] <display option>]
Gives the Data Pump job command line.
{ -cp | --copy } <db link> [<source schema> [<target schema>]]
[[-opt] <filter/transform/display option>]
Copies a schema or part of it into another or same one.
{ -d | --display } [<schema>.]<dp job> [[-opt] <display option>]
Gives information about a Data Pump job.
{ -d | --display } {<directory>:<file>|<full path file>} [[-opt] <display option>]
Gives information about a dump file.
{ -ddl | --ddl }
[<directory>:]<file> <dblink> <type> [<object>[,<object>...]]
[[--]show] [{[--]keep|[--]drop}]
[[-opt] <filter/transform/display option>]
Retrieves the DDL for all <object> related to <type> Data Pump session across
<dblink> applying the filter and/or transform options.
{ -f | --follow } [<schema>.]<dp job> [[-opt] <display option>]
Follows a Data Pump job displaying information as and when it runs.
{ -k | --kill } [<schema>.]<dp job>
[{[--]keep|[--]drop [[--]force]}] [[--]noprompt]
Kills a Data Pump job.
{ -l | --list } [<schema>] [{[--]active|[--]idle}] [{-v|[--]verbose}]
[-opt schema=<schema>]
Lists Data Pump jobs of all or one schema.
{ -m | --modify } [<schema>.]<dp job> [[-opt] <execution option>]
Modifies some execution parameters of Data Pump job.
{ -r | --restart | --continue } [<schema>.]<dp job> [<service name>]
[[--]skip_current]
[[-opt] service_name=<service name>]
Restarts a previously stopped or killed job.
{ -s | --stop | --suspend } [<schema>.]<dp job>
[{ [--]keep | [--]drop [force] }] [[--]noprompt]
Stops a Data Pump job.
{ -u | --user } [<user>[,...]] [[-opt] grants=<grant type>[,...]]
[-opt user=<user>[,...]]
Get the DDL to recreate accounts with, possibly, roles, system grants and
object grants.
{ -w | --where } [<schema>.]<dp job> [[--]safe] [[-opt] <display option>]
Displays what a job has already completed and what it is currently working on.
and <logon> is:
{ <username>[/<password>][@<connect_identifier>]
| /[@<connect_identifier>] }
| <proxyuser>[<username>][/<password>][@<connect_identifier>]
| [<username>]/[@<connect_identifier>] }
[AS SYSDBA]
in the 2 lines before the last one, [] around <username> are literal [], not syntactical
characters denoting an optional parameter. These lines refer to proxy connections.
<logon> must be the first parameter if not introduced by the USERID keyword (see below).
Notes:
* If <username> is not the same as <schema>, <username> must have the
"DATAPUMP_EXP_FULL_DATABASE" or "DATAPUMP_IMP_FULL_DATABASE" role (depending on
the command or job type). If it has "SELECT_CATALOG_ROLE" role or
"SELECT ANY DICTIONARY" privilege it can also have more information about its
own Data Pump jobs.
* About "-cmd" command, if the job was not launched using expdp or impdp or if
CLIENT_COMMAND was given, McDP gives this later one and not the original command.
* [directory] in "--content", "--ddl" and "--display" commands is an Oracle directory;
if it is not given and <file> is not a complete file path, the directory given with
the "DIRECTORY" option is taken and if there is none, DATA_PUMP_DIR is taken.
* If full path files are given and the directory they are in does not exist, the
account must have the "CREATE ANY DIRECTORY" privilege to be able to create a
temporary Oracle directory to the files.
* You can restrict the "--list" to active or idle Data Pump jobs using the "ACTIVE"
or "IDLE" option (see below). You can also restrict to a schema using the <schema>
parameter or the "-opt schema=<schema>" option.
* You can constrain the service on which a job is restarted using the <service name>
parameter or the "-opt service_name=<service name>" option.
* If you use "DROP" option on "--kill" or "--stop" command the master table is dropped
and the job cannot be restarted later (default option is KEEP). If you use "--kill"
command, the job and its workers are killed and the job may or may not be restartable,
even if DROP is not given, and some parts of the job may have to be rerun. If you use
"--stop" command, the workers are allowed to finish their current task before ending.
* If you use "KEEP" option on "--content" or "-ddl" command the SQL result file is kept.
* "SHOW" indicates if you want the result SQL file to be displayed on the screen or not.
* "SAFE" option is useful in commands that may create some objects (type, procedure)
(if the current account has the privileges to) to display more information;
this option lays down that McDP can't create any object.
* In "COPY" command, if you don't give the target schema, the source one is taken;
if you don't give the source schema, the current user is taken.
* There can be several "-opt" parameters, see "OPTION" below.
Usage 2: mcdp.pl [KEYWORD=<value> [...]]
The available keywords and their descriptions follow.
ACTIVE={YES|NO} Displays only active jobs for LIST command.
COMMAND=<command> Command to apply; this can be ALTER, COMMAND, CONTENT,
COPY, CONTINUE, DDL, FOLLOW, DISP[LAY], KILL, [LIST],
MOD[IF[Y]], RESTART, STOP, SUSP[END], USER or WHERE.
DIRECTORY=<Oracle directory> Oracle directory for CONTENT, DDL and DISPLAY commands.
DROP={YES|NO} Drops master table, for KILL and STOP/SUSPEND commands.
Drops SQL result file for CONTENT and DDL commands.
DUMPFILE=[<dir>:]<file>[,...] File(s) to analyze.
FORCE={YES|NO} Forces drop of master table (default is NO).
HELP={YES|NO} Displays this help (default is NO).
FROMUSER=<source schema> Gives the source schema for COPY command.
IDLE={YES|NO} Displays only idle jobs for LIST command.
JOB=[<schema>.]<dp job> Data Pump job on which to apply the command.
KEEP={YES|NO} Keeps master table, for KILL and STOP/SUSPEND commands.
Keeps SQL result file for CONTENT and DDL commands.
NETWORK_LINK=<dblink> Name of remote database link to the source database.
NOPROMPT={YES|NO} Do not prompt for command confirmation
(default is NO that is ask for confirmation).
OBJECT=<object>[,<object>...] Object lists for DDL command.
SAFE={YES|NO} Lays down to create no new objects.
SERVICE_NAME=<service name> RAC service name on which the job will run.
SHOW={YES|NO} Displays the generated SQL file for DDL and CONTENT.
SKIP_CURRENT={YES|NO} Skips current step when restarting an import.
SOURCE=<source schema> Gives the source schema for COPY command.
SQLFILE=[<directory>:]<file> SQL result file
TARGET=<target schema> Gives the target schema for COPY command.
TOUSER=<target schema> Gives the target schema for COPY command.
TYPE=<type> Data Pump session type: FULL, SCHEMA or TABLE.
USER=<user>[,...] Account name(s) for USER command; joker characters
(_%) are expanded.
USERID=<logon> User credentials to connect (see above).
VERBOSE={YES|NO} Sets a verbose output
<miscellaneous options> Syntax is <option>=<value> where <option> and <value>
depend on the command and are described below.
Notes:
* "SUSPEND" and "STOP" are synonymous as well as "RESTART" and "CONTINUE", and
"ALTER" and "MODIFY".
* "SOURCE" and "FROMUSER" are synonymous as well as "TARGET" and "TOUSER".
* Valid but irrelevant options for a command are (almost always) ignored (no error)
whereas invalid options return an error.
* Default value for "DIRECTORY" is DATA_PUMP_DIR.
* "DUMPFILE" can be a full path file name or a file inside the <dir> or "DIRECTORY" value.
* Depending on the command there can be or not multiple "OPTION" values among possible ones.
* For "ALTER" and "MODIFY" commands (execution options):
ADD_FILE Adds dumpfiles to dumpfile set (export only), syntax:
[<file size>#][<directory>:]<file>[,...]
PARALLEL Changes the number of active workers.
REUSE_DUMPFILES Overwrites destination dump file if it exists
(export only), syntax: { YES | NO }
* For "CONTENT", "COPY" and "DDL" commands (filter and transform options):
CLIENT_COMMAND Assigns a text to the current session
VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
DISABLE_ARCHIVE_LOGGING Specifies whether to disable archive logging: YES/NO
EXCLUDE Gives an expression for objects to exclude, syntax:
OBJECT_TYPE[:"EXPRESSION"][,OBJECT_TYPE[:"EXPRESSION"]...]
INCLUDE Gives an expression for objects to include, syntax:
OBJECT_TYPE[:"EXPRESSION"][,OBJECT_TYPE[:"EXPRESSION"]...]
INMEMORY Indicates if IN MEMORY has to be specified or not, syntax:
VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
INMEMORY_CLAUSE Specifies an IN MEMORY clause, syntax:
TEXT[:OBJECT_TYPE][,TEXT[:OBJECT_TYPE]...]
LOB_STORAGE Specifies a LOB STORAGE clause for tables, possible
values: SECUREFILE, BASICFILE, DEFAULT, NO_CHANGE
NAME Gives the name pattern of the displayed objects, syntax:
OBJECT_TYPE:"EXPRESSION"[,OBJECT_TYPE:"EXPRESSION"...]
OID Indicates if OID has to be included or not, syntax:
VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
PARALLEL Gives a degree of parallelism to the analysis
PARTITION Gives an expression to filter partitions, syntax:
[[SCHEMA.]TABLE:]EXPRESSION
PCTSPACE Indicates if PCTSPACE has to be included or not, syntax:
VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
REMAP_DATAFILE Indicates how to remap a datafile, syntax:
OLD_VALUE:NEW_VALUE[,OLD_VALUE:NEW_VALUE...]
REMAP_SCHEMA Indicates how to remap a schema, syntax:
OLD_VALUE:NEW_VALUE[,OLD_VALUE:NEW_VALUE...]
REMAP_TABLE Indicates how to remap a table, syntax:
OLD_VALUE:NEW_VALUE[,OLD_VALUE:NEW_VALUE...]
REMAP_TABLESPACE Indicates how to remap a tablespace, syntax:
OLD_VALUE:NEW_VALUE[,OLD_VALUE:NEW_VALUE...]
SCHEMA Gives the name pattern of the displayed schema, syntax:
OBJECT_TYPE:"EXPRESSION"[,OBJECT_TYPE:"EXPRESSION"...]
SEGMENT_ATTRIBUTES Indicates if segment attributes have to be included or not,
syntax: VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
SEGMENT_CREATION Indicates if segment creation clause has to be included or
not, syntax: VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
SOURCE_EDITION Indicates the source edition of the objects.
STORAGE Indicates if STORAGE clause has to be included or not,
syntax: VALUE[:OBJECT_TYPE][,VALUE[:OBJECT_TYPE]...]
TABLE_COMPRESSION_CLAUSE Specifies a COMPRESSION clause for tables, See Oracle
Database SQL Language Reference for more information
about table compression options and syntax, for example
"COMPRESS BASIC".
TABLESPACE Gives the name pattern of the displayed tablespace, syntax:
OBJECT_TYPE:"EXPRESSION"[,OBJECT_TYPE:"EXPRESSION"...]
TARGET_EDITION Indicates the target edition of the objects.
VIEWS_AS_TABLE Indicates if a view has to be converted to a table, syntax:
[SCHEMA_NAME.]VIEW_NAME[:TABLE_NAME][,...]
"EXPRESSION" can be any SQL expression allowed in a SQL WHERE clause.
"VALUE" can be YES or NO.
"TEXT" can be any valid clause text for the transformation (see Oracle documentation).
* For "FOLLOW" command (display options):
TIMEOUT Number of minutes without any new message before a new
status message is displayed by McDP (default is 5 minutes).
TIMEOUTMAX Maximum value for time-out (default is NULL which is
unlimited, 0 is also unlimited).
TIMEOUTMF Multiplying factor to apply to TIMEOUT when TIMEOUTNB
times-out have been raised.
TIMEOUTNB Number of consecutive times-out before a multiplying
factor is applied (default is 5).
To avoid too many messages, when TIMEOUTNB times-out have been raised, the time-out time
is multiplied by TIMEOUTMF until it reaches or exceeds TIMEOUTMAX.
* For "LIST" command (filter option):
SCHEMA Restricts the list of Data Pumps to a schema.
* For "USER" command (grant options):
GRANT Gives the grant types that should be included in the script,
could be one or several among ROLE, SYSTEM and OBJECT, or
ALL for all these ones.
* For all commands (display option):
LINESIZE Sets the total number of characters that McDP displays on
one line before beginning a new line (default is 120).
If <logon> is not given, McDP asks for it.
The program is provided as it is without any guarantees or warranty. Although the
author has attempted to find and correct any bugs in this free program, the author
is not responsible for any damage or losses of any kind caused by the use or misuse
of the program. The author is under no obligation to provide support, service,
corrections, or upgrades to this program.
You can freely use, copy and distribute this program but you can't modify it without
the permission of the author, whom you can contact at http://www.orafaq.com
You can post your comments, ask for improvements, report bugs... on the program at
http://www.orafaq.com/forum/t/201760/

As you can see, the program supports two styles of usage: a Unix-style one with "-<opt> <value>" parameters and another one, similar to the expdp/impdp programs, with "KEYWORD=<value>" parameters. You can even mix both syntaxes.
Note: If you want to alter other users' Data Pump jobs you must have the DATAPUMP_EXP_FULL_DATABASE or DATAPUMP_IMP_FULL_DATABASE role; if you want to see information about other users' Data Pump jobs, you must have SELECT privileges on some DBA_DATAPUMP and V$ views, or have the SELECT_CATALOG_ROLE role or the SELECT ANY DICTIONARY privilege.
Note: You can give files within an Oracle directory or with an absolute path. In the latter case, if the file path matches an Oracle directory you have access to, there is no problem; otherwise McDP will try to create a new Oracle directory for this path, if you have the privilege to do so. The directory is dropped before the program exits.
Note: Some commands can give more information if you have the CREATE TYPE and/or CREATE PROCEDURE privileges; if so, the program will create objects to execute the command and remove them at the end. If you don't want McDP to create these objects, give the "-safe" or "SAFE=YES" option.
Note: McDP has not yet been tested on version 12c but it most likely works; I'm waiting for your feedback if you see anything wrong...
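For instance, based on the help above, a call mixing both styles (hypothetical logon and schema names) could be:

C:\>McDP michel@mydb -l scott --verbose ACTIVE=YES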

"--active" or "ACTIVE=YES" restricts the result to running Data Pump jobs (default is "ACTIVE=NO").

"--idle" or "IDLE=YES" restricts the result to not running Data Pump jobs (default is "IDLE=NO").

"-v", "--verbose" or "VERBOSE=YES" adds some information about the Data Pump jobs and their sessions.

Of course, the ACTIVE and IDLE parameters are mutually exclusive.
Note: If you don't give database credentials, McDP will ask you for them.
Note: The line size can be adjusted as you want using the LINESIZE=<n> parameter. The default is 120.
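Based on the help above, a keyword-only LIST call (no logon given, so McDP would prompt for credentials; schema name hypothetical) might be:

C:\>McDP COMMAND=LIST SCHEMA=scott IDLE=YES VERBOSE=YES LINESIZE=150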

"Operation" column gives the type of Data Pump job; it can be "EXPORT", "ESTIMATE", "IMPORT", "SQL_FILE" or "NETWORK"

"Mode" column gives the mode of the Data Pump job; it can be "FULL", "SCHEMA", "TABLE", "TABLESPACE" or "TRANSPORTABLE"

"Par." column gives the parallelism degree

"Ses." column gives the number of sessions interested in the data Pump job: expdp/impdp or attached program sessions, master session, workers sessions (here 3: the expdp program, the master and one worker)

"Att." column gives the number of attached sessions (here 1: the expdp program)

We see that the Data Pump job is waiting for the tablespace SCOTT2 to be extended while it is working on the import of table SCOTT2.T.
This is useful when you are a DBA and a user is complaining that their export is "hanging".

Note: If you don't provide the schema, yours is used. You must be a privileged user to see other users' jobs.
Note: The Data Pump job can be currently running or not.
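Based on the help above, a plausible WHERE invocation on another user's job (hypothetical logon and job names) would be:

C:\>McDP michel@mydb -w scott.SYS_IMPORT_TABLE_01 --safe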

Examples where MICHEL is a privileged account

Display a job following its execution
For the job we followed in the previous post with the LIST command, the DISPLAY command gives the following output.
Reminder: the command which launched the export was:

This case is different from the previous ones.
First, the mode is SQLFILE, which is neither an export nor an import (although Oracle performs it through impdp) but an analysis of a dump that returns the SQL statements residing in it.
Second, the job was not created by expdp or impdp but by McDP; you can see this from the "Client command" data, shown here:

This is what McDP generates for a CONTENT command, as we'll see below: it analyzes the content of the dump made of the 2 mentioned files and returns the result in the given SQL file. We can see that the job was stopped. Such a job is not restartable, which means that if you stop it you lose all the work that has been done, and we can't get any information from the Data Pump engine, which refuses our connection for this job.

Note: This command can give more information if you have the CREATE TYPE and CREATE PROCEDURE privileges it needs to build some of its results; these objects are removed after the command execution. If you have these privileges but don't want McDP to create any object, use the "--safe" or "SAFE=YES" option.
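Based on the help above, such a DISPLAY call (hypothetical logon and job names) could be:

C:\>McDP michel@mydb -d michel.SYS_SQL_FILE_FULL_01 SAFE=YES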

Examples where MICHEL is a privileged account

Display information during an export of several schemas
The export command was:

We see all the steps that have been done, with their start and end times, and the current operation of the workers. Here Worker 1 is executing the step "export of table data" (DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA), has already done 16 out of 60 tables (that is 26.7% of them) and is working on table STAGE8.T. Worker 2 is waiting for the Master to send it work. The Master only works on metadata and sends work to the Workers.

Display information during an import of several schemas
We import the schemas exported above; the import command was:

Here, in addition to the information we have for an export, we also have information about what all workers have already done: each step with the number of items and the first 3 lines of the item list.
Note: this latter information is displayed only if McDP can create the type and function it needs for it; otherwise you only get the number of items, without the list.

We see the Master has done a couple of new steps, and the workers have completed new tasks and are now working on constraints. We also get the time at which the current task (building constraint STAGE0.SYS_C0056783) started.

But if you just want the command line of a Data Pump job, it is faster to execute the COMMAND command than to ask the user who launched the job, or to rely on your memory, especially if it is an old stopped job.
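Based on the help above, a plausible COMMAND invocation (hypothetical logon and job names) would be:

C:\>McDP michel@mydb -cmd scott.SYS_EXPORT_SCHEMA_01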

Note: If you don't provide the schema, yours is used. You must be a privileged user to stop other users' jobs.
Note: STOP and SUSPEND are synonymous.

"--keep" and "KEEP=YES" parameters mean the job Master table is kept; this is the default option.

"--drop", "DROP=YES" and "KEEP=NO" parameters mean the job Master table is dropped after the job is stopped.

"--force" and "FORCE=YES" parameters can only be used with "--drop" or "DROP=YES" and mean the Master table is dropped even if we can't properly stop the job and drop the Master table the using Data Pump engine (in this case a DROP TABLE is executed, PURGE option is not used). The default is "FORCE=NO".

The command first displays some information about the Data Pump job and, unless you give the "--noprompt" parameter or "NOPROMPT=YES", it prompts you to confirm the action before stopping the job.
The difference between STOP/SUSPEND and KILL is that the former lets the workers complete their current task before stopping, whereas the latter aborts them. This means that if you use KILL and the job is restartable and restarted (see next post), the tasks that were current at KILL time have to be completely re-executed, which may lead to errors like "ORA-00001: unique constraint violated".
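Based on the help above, a stop that drops the Master table without asking for confirmation (hypothetical logon and job names) might be:

C:\>McDP michel@mydb -s michel.SYS_IMPORT_SCHEMA_01 DROP=YES FORCE=YES NOPROMPT=YES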

Example where MICHEL is a privileged account
We will execute an import job we have already used in the previous posts: import the exported SCOTT schema into SCOTT2, remapping the tablespace:

Note: If you don't provide the schema, yours is used. You must be a privileged user to restart other users' jobs.
Note: CONTINUE and RESTART are synonymous.

"<service_name>" and "SERVICE_NAME=<service_name>" parameters allows you to restart the job, in a RAC environment, in an instance supporting this specific service.

"--skip_current" and "SKIP_CURRENT=YES" parameters, only valid for an import, allow to restart a Data Pump skipping the steps the workers were executing when the job was killed or failed. This is useful when you want to skip a task which is taking too much time or resources or continuously fails.

Example where MICHEL is a privileged account
We will restart the import job we stopped in the previous post.

Sometimes we would like to modify some parameters of a Data Pump job, for instance the degree of parallelism: decrease it because the job is taking too many resources, or increase it to speed the job up. This is possible with McDP using the ALTER/MODIFY command.

Note: If you don't provide the schema, yours is used. You must be a privileged user to alter other users' jobs.
Note: ALTER and MODIFY are synonymous.

Currently the execution options you can give are the following ones:

ADD_FILE=[<file size>#][<directory>:]<file>[,...] to add dumpfiles to dumpfile set (export only);
<file size> is the size the file will have,
<directory> is the Oracle directory for the file.
PARALLEL=<n> to change the degree of parallelism of the job.
REUSE_DUMPFILES={YES|NO} to change the setting of the same parameter of the expdp program
(export only, and only if the job is still in its definition step).

Note: if you give a size, the file is preallocated with this size; if you don't give any size, the file can grow (from 0) without any limit (other than OS ones).
Note: you can use the "DIRECTORY=<Oracle directory>" parameter to specify a default directory for the new files.
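Based on the help above, an ALTER call matching the scenario below (hypothetical logon, directory and file names) might be:

C:\>McDP michel@mydb -m michel.SYS_EXPORT_SCHEMA_05 -opt reuse_dumpfiles=YES -opt add_file=100M#MYDIR:new_file.dmp -opt add_file=new_file2.dmp -opt parallel=4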

Example where MICHEL is a privileged account
We will launch an export and then modify its parameters; the export command is the following one:

We see that we have 2 workers and 2 dump files, and that 58MB have already been written into the second one.

Now we alter some parameters:
- change the REUSE_DUMPFILES to YES
- add 2 files in 2 different directories, one in MYDIR with a size of 100MB and one in DATA_PUMP_DIR with an unlimited size
- change the parallelism to 4

We see we can't change REUSE_DUMPFILES as the job is already executing.
The second file goes to DATA_PUMP_DIR directory because it is the default directory if none is given with the file name or with a DIRECTORY parameter.

Among other information we can see that these files have been generated by a Data Pump job named "MICHEL"."SYS_EXPORT_SCHEMA_05" on the instance "mikb2" on "Sat Aug 20 16:37:24 2016" and they are files number 1 and 2 of the dump.

Now have a look at the first added dump file in the previous post: mydir:new_file.dmp.

We can see that this file is number 3 and that it contains the Master table of the Data Pump job ("Master table present: Yes"); this Master table is "cut" into 1 piece ("Master piece count") and this is piece number 1 ("Master piece number").

Now I want to know what's inside the dump; more than that, I want the DDL, and not all of it but only some of the DDL statements, and I want to apply some transformations to them.
This is possible with McDP using the CONTENT command.

"-show" and "SHOW=YES" parameters indicate you want McDP displays on the screen the content of the result file.

"-keep" and "KEEP=YES" parameters indicate you want to keep the result file.

"-drop" and "DROP=YES" parameters indicate you want McDP removes the result file.

The default values are SHOW=YES, KEEP=NO and DROP=YES, but if you choose KEEP=YES or DROP=NO, the default value for SHOW becomes NO.
Note: If you explicitly give the DROP=YES option, SHOW is forced to YES.
Note: The filters and transformations you can give are the same ones you can give with the impdp program (plus some syntax this latter does not accept). A list is given in the help at the top of this topic; the detailed descriptions can be read in the Data Pump documentation within the Oracle Database Utilities book.
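Based on the help above, a CONTENT call excluding statistics and remapping the schema as in the second example below (hypothetical logon, directory and file names) might be:

C:\>McDP michel@mydb -c mydir:michel.dmp --keep -opt exclude=STATISTICS -opt remap_schema=michel:scott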

Examples where MICHEL is a privileged account

We want to get into a file all the DDL from the export of the SCOTT schema we have seen in a previous post, excluding the statistics:

Now we have a dump of schema "MICHEL" and we want tables "XXX" and "TTTT" but only if they are in tablespace "ORAFAQ" and we want to remap the schema to "SCOTT" and tablespace to "TS_D01", without the statistics.

Table "TTTT" exists in the dump but is not in the proper tablespace so it is skipped.
Note that McDP records in the job it creates the parameters you gave, translating them into valid Data Pump expressions, and displays them so you can check there were no problems and this is what you want. This starts with "<USER>: Display content of file(s):"

Now we want, from the same dump, all the procedures starting with "P" (except procedure "PRINT_TABLE") from source edition "ORA$BASE", remapping them to schema "SCOTT" and to edition "TOTO":

"[<directory>:]<file>" and "SQLFILE=[<directory>:]<file>" parameters give the SQL result file.

"<dblink>" and "NETWORK_LINK=<dblink>" parameters give the database link through which the Data Pump engine will inquire the "remote" database, even if you want DDL from the database on which you are connected and with the same user, you must provide a database link, this is a restriction of the Data Pump engine.

"<type>" and "TYPE=<type>" parameters indicate the Data Pump job type, it could be "FULL", "SCHEMA" or "TABLE", just like for an export using "expdp" program you must provide one (and only one) of the parameters "FULL", "SCHEMAS" or "TABLES", this is the mode you'll find in USER_DATAPUMP_JOBS.JOB_MODE column.

"-show" and "SHOW=YES" parameters indicate you want McDP displays on the screen the content of the result file.

"-keep" and "KEEP=YES" parameters indicate you want to keep the result file.

"-drop" and "DROP=YES" parameters indicate you want McDP removes the result file.

The default values are SHOW=NO, KEEP=YES and DROP=NO.
Note: If you explicitly give the DROP=YES option, SHOW is forced to YES.
Note: The filters and transformations you can give are the same ones you can give with the impdp program (plus some syntax this latter does not accept). A list is given in the help at the top of this topic; the detailed descriptions can be read in the Data Pump documentation within the Oracle Database Utilities book.
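Based on the help above, a DDL call mirroring the first example below (hypothetical logon, dblink and file names; the INCLUDE paths are standard Data Pump object paths) might be:

C:\>McDP michel@mydb -ddl michel.sql mydblink SCHEMA michel --show -opt include=USER,ROLE_GRANT,SYSTEM_GRANT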

Examples where MICHEL is a privileged account

The first example will extract the CREATE USER statement for the MICHEL account as well as its roles and system privileges; the result will be displayed and stored in the "michel.sql" file:

As for the CONTENT command, McDP displays the translation of the options you gave into the Data Pump engine syntax; this is the part starting with "<user>: get DDL from <dblink>".
We will see another, faster way in a couple of posts: the USER command.

The second example will extract the SCOTT account definition and all its tables except the "T%" and "SYS%" ones, without the statistics and the storage parameters; the result will be displayed and stored in the "scott.sql" file:

Now that we have seen how to get information about Data Pump jobs, active or stopped, a new question arises: it would be nice to see what a running Data Pump job is doing while it is doing it, either because the expdp or impdp session that launched it is no longer there or because, as a DBA, you see a Data Pump job running and are concerned about it. Is this possible?
The answer is: yes, with McDP, using the FOLLOW command.

We can see that the first thing the McDP FOLLOW command does is display information about the Data Pump job and its status. Here the job is still in the DEFINING step, which means it is doing its initialization and has not actually started the import process (the worker is waiting for the master to send it some work). The "Starting" message appears a second later, as you can see.
(Ignore the "Terminating on signal SIGINT(2)" message, it is there because I aborted McDP session using Control-C.)

After terminating the FOLLOW command, we launch a WHERE one to see where the Data Pump job is:

We can better see here the progress of the job with the "percent done" lines; it is a percentage of the dump files read, not of the number of imported objects.

At this point let me explain these "timeout*" parameters.
As the Data Pump engine only reports when something is completed or an error occurs, it may not report anything for minutes or even hours during very long operations. So that we are not left wondering whether the job is hanging somewhere or actually executing something, I added this time-out mechanism. If the Data Pump engine has not reported anything for some time, the time-out period, McDP displays the current status of each worker: what operation it is doing, how much it has already done, what it is currently working on. This is the purpose of the first parameter, "timeout", which sets the initial value of the time-out, in minutes. Here we set it to 1 minute, which means that if the Data Pump engine does not report a message within a minute, McDP displays some status messages. This is what happened here:

The statistics computation started at 18:07:46.687 and lasted more than a minute, so at 18:08:46.718 McDP displayed a message saying that "Worker 1" (the only one in this case) had completed 68 of 219 tables and was currently working on the INCOMING table.
Now the execution time could be so long that we could get hundreds of such status lines, which would fill the screen and might make us miss some important messages.
So I introduced 2 other parameters: a threshold (timeoutnb) and a multiplying factor (timeoutmf): when the number of time-outs exceeds the threshold, the time-out period is multiplied by the factor. (We can't actually see this happening in my example as the steps are not long enough.)
The last parameter, timeoutmax, sets a maximum value for the time-out to grow to, if we want not too many messages but still at least one status every "timeoutmax" minutes.
When the Data Pump engine reports a new message, the time-out period is reset to its initial value.
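To make this concrete, a FOLLOW call setting all four parameters (hypothetical logon, job name and values) might be:

C:\>McDP michel@mydb -f michel.SYS_IMPORT_FULL_01 -opt timeout=1 -opt timeoutnb=5 -opt timeoutmf=2 -opt timeoutmax=30

With these settings, a status would be displayed after each minute of silence; after 5 consecutive time-outs the period would be multiplied by 2 until it reaches the 30-minute cap, and any new Data Pump message would reset it to 1 minute.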

The last command of the first version of McDP is not a real Data Pump command, as it does not use the Data Pump engine, but it is a useful one for a requirement DBAs are often asked to fulfil: get the DDL to recreate one or several accounts, possibly with their privileges.
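Based on the help above, a plausible USER invocation extracting two accounts with all their grants (hypothetical logon and account names) would be:

C:\>McDP michel@mydb -u scott,michel -opt grants=ALL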

"<dblink>" and "NETWORK_LINK=<dblink>" parameters give the database link through which the Data Pump engine will inquire the "remote" database, even if you want to copy into the database on which you are connected and with the same user, you must provide a database link, this is a restriction of the Data Pump engine.Note: If you don't give the target schema, the source one is taken; if you don't give the source schema, the current user is taken.

Examples where MICHEL is a privileged account

We will copy the "SCOTT" schema into "SCOTT2", except the "T%" and "SYS%" tables, remapping the tablespace.
First we check that there are no objects in the SCOTT2 schema:

I advise those who have downloaded the previous version to download this new one, to avoid leaving some pending or running Data Pump jobs behind: it used to be like expdp/impdp, where the server process keeps running when you kill the client program; now, when you kill McDP, it cleans up the job (with Control-C or kill -15, of course, not kill -9, which means an immediate abort).