In our databases we have a few external tables whose underlying files can be located in different physical directories, and these directories are created dynamically.
Loading these files is not an issue, since we can create directory objects dynamically - and it works great.
The directory names are created as 'AUTO_GENERATED_DIR_n', where n is derived from a sequence - we cannot reuse directory names because of certain limitations.
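For context, the dynamic creation amounts to something like the following sketch (the sequence name, the path, and the PL/SQL wrapper are illustrative, not our actual code):

```sql
-- Sketch: create a fresh directory object per load.
-- auto_dir_seq and the /data/incoming path are hypothetical.
DECLARE
  l_dir_name VARCHAR2(30);
BEGIN
  l_dir_name := 'AUTO_GENERATED_DIR_' || auto_dir_seq.NEXTVAL;
  EXECUTE IMMEDIATE 'CREATE DIRECTORY ' || l_dir_name ||
                    ' AS ''/data/incoming/' || l_dir_name || '''';
END;
/
```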

The problem starts when trying to import the external tables from an expdp export: they exist in the dump with the latest directory object used (for example, DEFAULT DIRECTORY AUTO_GENERATED_DIR_1), and impdp then fails to import them into a database where the dynamically created directory objects don't exist (AUTO_GENERATED_DIR_1 does not exist there).

How can we bypass this?
Is there a way to do a transform, like the one available for types with TRANSFORM=oid:n:type?
Or maybe a way to remap all directory objects to a single directory object that I can guarantee exists in all of our DBs?

Since I'm not familiar with such switches, I thought of an alternative, but it is also problematic:
change the default directory of all external tables before the export to a directory that exists in all DBs. For that, however, we must ensure that no one changes the default directory during the export or by the time the export is issued (and 99% of the time this will happen, because files are being loaded all the time and we can't stop everything from working).
We could also try to pre-create all possible directory object names in all DBs, but I want to avoid that, as it is both ugly and not 100% failsafe.
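The "change the default directory before export" alternative would amount to something like this for each external table (a sketch; COMMON_EXT_DIR, its path, and EXT_SALES are illustrative names):

```sql
-- A directory object guaranteed to exist in every database.
CREATE OR REPLACE DIRECTORY common_ext_dir AS '/data/common';

-- Repoint an external table at it before running expdp.
ALTER TABLE ext_sales DEFAULT DIRECTORY common_ext_dir;
```

This is exactly the step that is racy: any load that creates and assigns a new AUTO_GENERATED_DIR_n between this ALTER and the export would reintroduce the problem.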

We are creating directory objects dynamically.
Most of the directory objects do not exist in the target database (as can be understood from the example name I provided, NON_EXISTING_DIR_IN_OTHER_DB).
Since I have many external tables, each defined with its own dynamically created directory object, I'm looking for a way around this: a way for the import not to fail on these tables, and something that can be done automatically. I want to avoid creating all the dynamically created directory objects in the target DB before the import, as there can be tens of thousands of such directory objects.

Import expects the directory objects to exist on the target. One option could be to export all directory objects first using a metadata filter ( http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#i1009903 ), import those directory objects into the target, and then do the export/import of the tables you need. I have not tried this personally.
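An untested sketch of that approach (credentials, connect strings, and file names are placeholders). Note that DIRECTORY is a database-level object, so this part needs a full-mode job:

```shell
# 1. Export only the directory object definitions (metadata only).
expdp system@srcdb full=y content=metadata_only include=DIRECTORY \
      directory=DATA_PUMP_DIR dumpfile=dirs.dmp logfile=dirs_exp.log

# 2. After moving dirs.dmp to the target server, create them there.
impdp system@tgtdb full=y directory=DATA_PUMP_DIR dumpfile=dirs.dmp \
      logfile=dirs_imp.log

# 3. Then run the normal schema-level export/import of the tables.
```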

That filter seems to exist only in database (full) mode, while we do schema exports and imports.
I was hoping for a remap or transform option. I want to avoid creating all the directory objects in the target DB, because they are not relevant there.

I guess there's no such option, and we are left with the manual solution then?

If these external tables need to be imported into the target, then I do not believe you have any options currently. If they are not needed on the target, you can use a filter to prevent them from being exported.
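A sketch of that filter for a schema-level export (the schema and table names are placeholders; a parameter file is assumed here because it avoids shell-quoting problems with the name clause):

```shell
# Contents of exp_no_ext.par (illustrative):
#   schemas=APP_SCHEMA
#   directory=DATA_PUMP_DIR
#   dumpfile=app_no_ext.dmp
#   logfile=app_no_ext.log
#   exclude=TABLE:"IN ('EXT_SALES','EXT_ORDERS')"
expdp system@srcdb parfile=exp_no_ext.par
```

Since the set of external tables changes over time, the name clause could in principle be a subquery against the data dictionary instead of a hard-coded list, though I have not verified that in your version.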