
Restart required when editing oracle tnsnames.ora?

Hello,
I am currently trying to export a dump file from Oracle 10g which will be imported into an Oracle 9i instance on another server. There appears to be a bug in SQL Developer which chops the last column off each table when a SQL insert or SQL*Loader file is generated.

Presumably this is a valid approach to performing this data transfer; if not, could someone please offer details as to why?

The reason for this post is that the Oracle 9i instance into which I want to import the data sits alongside a number of other database instances, and other members of the development team are performing ongoing operations which I absolutely cannot interrupt or affect. My question, therefore, is: is it safe to edit tnsnames.ora as outlined above without restarting Oracle, interrupting operations on the other database instances, or having any of the other instances know about the change? To put it another way, what are the consequences of a tnsnames.ora edit, and what steps are required afterwards to get the DBMS to recognise it? Will saving my edits be enough for the exp command detailed above to work correctly?

You don't need to restart anything after a change to the tnsnames.ora file. The worst thing that can happen is that you alter an existing entry, making future connections fail, but as long as you are careful it shouldn't be a problem.

My advice is to get someone to test the existing connections as soon as you save the change, and be ready to revert in case of a problem.
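For anyone finding this thread later, adding a brand-new entry to tnsnames.ora is purely additive and doesn't touch the existing descriptors. A sketch of the shape such an entry takes (the alias DB10G, host, port and service name below are made-up placeholders, not values from this thread):

```shell
# Hypothetical entry appended to tnsnames.ora on the 9i server.
# DB10G, the host name, port and service name are all placeholders.
DB10G =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = db10g.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl10g)
    )
  )
```

After saving, running `tnsping DB10G` on the 9i server is a quick way to confirm the new alias resolves, without disturbing any existing connections.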

Using this method is OK, but it's quicker to export on the 10g server, compress the dump file, FTP it to the 9i server, expand it and then import it. Exporting across the network is fairly bandwidth-hungry. If network bandwidth isn't a problem, then go for it.
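As a rough sketch of that compress-and-transfer route (credentials, schema and file names below are placeholders, and, per the rest of this thread, the exp binary must be the 9i version when the target database is 9i):

```shell
# On the 10g server: export locally, then compress.
# scott/tiger and the file names are hypothetical examples.
exp scott/tiger file=full10g.dmp log=exp10g.log owner=scott
gzip full10g.dmp

# Transfer full10g.dmp.gz to the 9i server (ftp, scp, etc.), then there:
gunzip full10g.dmp.gz
imp scott/tiger file=full10g.dmp log=imp10g.log fromuser=scott touser=scott
```

The compression step matters because dump files are typically highly compressible, so the transfer is much cheaper than streaming the export over SQL*Net.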

9i exp on a 10g oracle instance

Hi Tim,
thanks for the advice. Network bandwidth is likely to cause some difficulty, but for now, if I can get it working, I'd be happy.

Regarding performing the export on the 10g server, presumably you mean using the 9i version of exp? (I've found a few forums on the Web where people say you must use the exp from the earlier version when going backwards, e.g. from 10g to 9i.)

If so, can you give me any pointers on the steps required to get the 9i exp command working on the 10g server?

I tried just copying over the 9i bin and lib directories and setting the LD_LIBRARY_PATH variable to point to the copied lib directory (because I got a 'shared library libclntsh.so.9.0 not found' error when running the exp command originally).

I realise the above is probably missing several steps or is naive in its approach, so could you tell me if there are additional directories I need to copy over, or, more generally, whether it's possible to port 9i's exp over to the 10g server without having to install an entire 9i instance?
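For what it's worth, the environment setup I attempted looked roughly like the sketch below (the paths are placeholders for wherever the copied 9i tree lives; a proper 9i client install is likely the more reliable route):

```shell
# Hypothetical sketch: point the environment at a copied 9i client tree.
# /opt/oracle9i is a placeholder path, not a real location from this thread.
export ORACLE_HOME=/opt/oracle9i
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$PATH

# Then invoke the 9i exp explicitly:
exp username/password@TNSENTRY
```

Note that exp also reads ORACLE_HOME to locate files such as tnsnames.ora (under $ORACLE_HOME/network/admin), so just fixing LD_LIBRARY_PATH alone may not be enough.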

Hi Tim,
Regarding performing the export on the 10g server, presumably you mean using the 9i version of exp? (I've found a few forums on the Web where people say you must use the exp from the earlier version when going backwards, e.g. from 10g to 9i.)

You're quite correct. I got caught up in the transfer speed and forgot about the version difference. For reference, the matrix is here:

10G to 9i exp/imp success

FYI, I followed your advice and everything worked smoothly. On the 9i server, I edited tnsnames.ora to add a connection descriptor for the database on the 10g server, which caused no problems with the other databases, as I had been fearing.

Then, as suggested, and still on the 9i server, I issued the command 'exp username/password@TNSENTRY <options>', where <options> are the basic exp options such as file=, etc., and TNSENTRY is the name given to the connection descriptor in tnsnames.ora, i.e. TNSENTRY = ( DESCRIPTION = (ADDRESS_LIST= .. ) (CONNECT_DATA= ... ) ).
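Concretely, the command took roughly the shape below (the credentials, TNS alias and file names are hypothetical placeholders, not the actual values used):

```shell
# Run on the 9i server, using the 9i exp binary, connecting over
# SQL*Net to the 10g database via the new tnsnames.ora alias.
exp scott/tiger@DB10G file=export10g.dmp log=export10g.log owner=scott
```

Because the 9i exp is doing the work, the resulting dump file is in a format the 9i imp utility can read directly.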

That led to the 10G data being exported to a dump file on the 9i server, which could be imported as normal using the 9i version of the imp utility.