Herman Buitenhuis – AMIS Oracle and Java Blog (https://technology.amis.nl)
Friends of Oracle and Java

Sometimes the cause of a TNS error is ….
Wed, 03 Sep 2014 – https://technology.amis.nl/2014/09/03/sometimes-cause-tns-error/

A couple of months ago one of my customers had a failed data warehouse report. An ORA-12592 (TNS) error message was generated.
It turned out not to be the only TNS error. Over a couple of weeks similar TNS errors were generated: not only the ORA-12592 error but also ORA-12514 and ORA-12571 errors.

We did some extensive sqlnet tracing but we didn’t find the cause there.

Finally the cause turned out to be a very simple and stupid one: the maximum number of processes had been reached…. But strangely no ORA-00020 error was generated or found in the alert.log. I would expect such an error in the alert.log when this happens, but it seems that Oracle does not always write one… (?)

Oracle has a parameter called processes. By default it is set to 150 during creation of the database; you can of course choose another value at database creation. If you set this parameter to 200 and more than 200 processes are started, then a new connection cannot be made to the database and the user gets some sort of TNS error like the ones mentioned above.

How can you find out that this is the case? You can find out using the following query:
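The query itself is missing from this copy of the post. Judging from the columns discussed just below (max_utilization, current_utilization, limit_value), it was a query on the v$resource_limit view along these lines (a sketch; the exact column list is an assumption):

```sql
-- Compare actual utilization against the configured limits
SELECT resource_name,
       current_utilization,
       max_utilization,
       limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```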

You can see that max_utilization is 200, which is the same as limit_value and as the processes parameter. This means the maximum number of processes has been reached at least once. If current_utilization is also at or near limit_value, then new users will get TNS errors during login. To solve this you should set the processes parameter to a higher value.

You can do that by the following command:

alter system set processes=300 scope=spfile;
and then restart the database.

The processes parameter is not a dynamic parameter, so you cannot change it without a restart of the database.

So next time I get TNS errors, I will take a look at the v$resource_limit view first, before starting extensive sqlnet tracing…

How to make a time consistent export dump using the expdp datapump utility
Wed, 03 Sep 2014 – https://technology.amis.nl/2014/09/03/make-time-consistent-export-dump-using-expdp-datapump-utility/

In the old days, when there was the exp utility, we made a time consistent export dump by using the consistent=y parameter.
But nowadays, and in fact for a couple of years already, we mostly use the expdp datapump utility. How should we make a time consistent export using datapump?
For that we use the flashback_time or flashback_scn parameter. In this post I show you how to set the flashback_time parameter.

The flashback_time parameter takes a date-time in the “timestamp” format as input. If you want a time consistent export dump of the present time, you should therefore set this parameter as follows:

flashback_time=systimestamp

If you want to use a parameter file, you should make a file with, for example, this content and name it for example scott.par:

schemas=scott
dumpfile=exp_scott.dmp
logfile=exp_scott.log
directory=DATA_PUMP_DIR
flashback_time=systimestamp
..
You can then execute the export using:

expdp system/password parfile=scott.par

If you want a time consistent export at another timestamp, let’s say September 3rd 2014 at 14:41:00, then you should set the flashback_time parameter as follows:
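The parameter setting itself is missing from this copy of the post. With a to_timestamp conversion it would look along these lines (the exact date format mask is an assumption):

```
flashback_time="to_timestamp('03-09-2014 14:41:00', 'DD-MM-YYYY HH24:MI:SS')"
```

In a parameter file it can be used as shown; on the command line the quotation marks would need escaping.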

From version 11.2 onward it is also possible to use the so-called legacy mode: you can use the parameters from the old exp utility! You can use the consistent=y parameter again to make a time consistent export:
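The closing example is missing from this copy of the post. In legacy mode the old exp-style parameters can be given to expdp, which rewrites them into their Data Pump equivalents (consistent=y becomes a flashback_time setting). A sketch using the old parameter names:

```
expdp system/password owner=scott file=exp_scott.dmp log=exp_scott.log consistent=y
```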

Sqlnet tracing during nightly hours…
Tue, 26 Aug 2014 – https://technology.amis.nl/2014/08/26/sqlnet-tracing-nightly-hours/

A TNS error at night…

Some time ago my data warehouse colleague came to me with a TNS error. At night he runs his batch jobs in order to update his data warehouse. That night one of his jobs did not run properly and generated an ORA-12592 error. He had to restart the job during daytime.

It turned out this was not the only occurrence of the TNS error. A couple of days later he came to me again with similar TNS errors, generated at a similar time. I looked in the alert.log and in the listener.log, but nothing could be found. Therefore I decided to switch on sqlnet tracing in order to find out what was happening. However, sqlnet tracing generates a lot of data, and the TNS errors were generated at night. It is not a good idea to switch on sqlnet tracing during daytime and then come back the next day to switch it off: you will probably get disk full problems!

Therefore I decided to make some scripts. Using crontab or the Windows scheduler, I switch on sqlnet and listener tracing some time before the TNS error normally occurs and switch it off some time after. I would like to share with you the way I did it.

My configuration to trace.

We run an Oracle 11.2.0.4 database on an Oracle Linux 6 server. Our client computer is a Windows server on which some data warehouse tools are installed and run. Oracle 11.2 client software is also installed on that client.

How to switch on sqlnet tracing

I set sqlnet tracing on 3 levels: client level, server level and listener level (also on the server). Sqlnet tracing on the client level can be switched on by setting parameters in the sqlnet.ora file on the client computer. On the server level you have to set parameters in the sqlnet.ora on the server. Setting parameters in the listener.ora file switches on listener tracing. These files can be found in the $ORACLE_HOME/network/admin directory.

Setting sqlnet tracing on the server:

On the server I copied the sqlnet.ora file to a file with the name sqlnet.ora.off. I made another copy of sqlnet.ora and gave it the name sqlnet.ora.on. Both files were put in the $ORACLE_HOME/network/admin directory, the same directory as for the original sqlnet.ora. I edited the sqlnet.ora.on file and added the following parameters to this file:

TRACE_LEVEL_SERVER = 16
You can set the level of tracing with this parameter. I used the highest level. But it could be a good idea to start with a lower level for example 4 or 6. Higher levels produce more data and therefore more gigabytes.

TRACE_DIRECTORY_SERVER = /u03/network/trace
LOG_DIRECTORY_SERVER = /u03/network/log
I decided to use another mountpoint than the default in order to prevent disk full errors. There was more disk space on the /u03 mountpoint.

TRACE_UNIQUE_SERVER = ON
This causes Oracle to generate a unique trace file for every connection.

TRACE_TIMESTAMP_SERVER = ON
If you set this parameter, a timestamp in the form of [DD-MON-YY 24HH:MI:SS] is recorded for each operation traced by the trace file.

DIAG_ADR_ENABLED = OFF
ADR_BASE = /u01/app/oracle
You should set these two parameters if you are using version 11g or higher. If you use version 10g or lower, you should not add them.

In my first version of the sqlnet.ora.on file I also set the parameters:
# TRACE_FILELEN_SERVER = ….
# TRACE_FILENO_SERVER = ….
But that turned out not to be a very good idea: huge amounts of files were generated. So I decided to throw them out.

Setting tracing on the listener:

I also made a copy of the listener.ora and named it listener.ora.off. I made another copy of this file and named it listener.ora.on. Also these files were put in the $ORACLE_HOME/network/admin directory. I edited the listener.ora.on and added the following parameters:
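The listener parameters are not reproduced in this copy of the post. By analogy with the sqlnet.ora settings above, they would look along these lines (assuming the default listener name LISTENER; with a different listener name, the parameter suffix changes accordingly):

```
TRACE_LEVEL_LISTENER = 16
TRACE_DIRECTORY_LISTENER = /u03/network/trace
TRACE_TIMESTAMP_LISTENER = ON
LOG_DIRECTORY_LISTENER = /u03/network/log
DIAG_ADR_ENABLED_LISTENER = OFF
ADR_BASE_LISTENER = /u01/app/oracle
```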

On the server you switch on sqlnet and listener tracing by the following command:

./sqlnet_trace_on.sh

You can switch off tracing by:

./sqlnet_trace_off.sh
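The scripts themselves are not reproduced in this copy of the post. A minimal sketch of what they do, based on the description above (the exact file handling and the listener restart are assumptions):

```shell
#!/bin/sh
# sqlnet_trace_on.sh -- swap in the tracing-enabled configs and restart the listener
ADMIN=$ORACLE_HOME/network/admin
cp "$ADMIN/sqlnet.ora.on"   "$ADMIN/sqlnet.ora"
cp "$ADMIN/listener.ora.on" "$ADMIN/listener.ora"
lsnrctl stop
lsnrctl start

# sqlnet_trace_off.sh is the mirror image: copy the ".off" files back
# over sqlnet.ora and listener.ora and restart the listener again.
```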

On the client you can run the scripts sqlnet_trace_on.cmd and sqlnet_trace_off.cmd. However, there is an important thing to note: because of Windows security, you should run these scripts in a cmd box with “run as administrator”! If you don’t do that, you get “Access is denied” errors.

Switching on sqlnet tracing automatically

Using crontab you can automatically switch sqlnet tracing on and off on the server. For example, if you want to switch on sqlnet tracing daily at 02:00 and switch it off at 03:00, you add (with “crontab –e”) the following lines to the crontab file:
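The crontab lines are missing from this copy of the post. For the 02:00/03:00 schedule they would look along these lines (the script location is an assumption):

```
0 2 * * * /home/oracle/scripts/sqlnet_trace_on.sh
0 3 * * * /home/oracle/scripts/sqlnet_trace_off.sh
```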

On the Windows client you can use the Windows Task Scheduler to switch sqlnet tracing on and off. However, because of Windows security, you can get access denied errors. In order to solve this I had to contact the Windows system administrator, who changed the security settings of the %ORACLE_HOME%/network/admin directory. After that it worked without any problems.

I switched on tracing on the client before the listener was restarted on the server. So I scheduled the script sqlnet_trace_on.cmd at 01:55 and sqlnet_trace_off.cmd at 02:55.

Using the above scripts and method I was able to do my sqlnet and listener tracing at night. And also sleep very well!

I would like to thank my colleague Karin Kriebisch, who made the initial version of the script.

How to stop running RMAN jobs in OEM Grid Control
Tue, 26 Feb 2008 – https://technology.amis.nl/2008/02/26/how-to-stop-running-rman-jobs-in-oem-grid-control/

Life became very easy after Oracle’s invention of OEM Grid Control. That is what Oracle promised us when they invented it. A couple of months ago one of my colleagues asked me to schedule backup jobs. In the past I made very nice OS scripts in order to make a backup. But now, with OEM Grid Control being available for quite some time, I thought let’s try making backups using OEM Grid Control.

And yes, it works fine. Grid Control generates quite interesting RMAN scripts, and you can schedule them. At a glance you can see all your backup jobs and their status in the job activity list, and whether they have run successfully. For script kiddies OEM Grid Control is bad news, because it makes the scripts for you. But if you like to be a wizard kiddie, you will feel in heaven.

But after a while, life with OEM Grid Control turned out not so nice. After some proper backup runs, I found a backup job which remained in status Running. The next day a new backup job for the same database was automatically started, but this job immediately got status “1 problems”. It turned out that the job was not started because Grid Control said: “An execution in one of the previous runs of the job was still running.”

So I thought: let’s stop the running job. So I did. Then Grid Control told me: “The job execution was stopped successfully. Currently running steps will not be stopped.” So I thought: life is easy again. But it turned out that this was not true. The running job got status “Stop Pending” and remained in this status.

So I thought: let’s kill the running step. So I did. But then Grid Control said: “The step was not killed because it has already completed.” And the job remained in status Running.

So I thought: let’s delete the job (what else should you do if OEM Grid Control refuses to listen?). So I did, but then OEM Grid Control said: “The specified job, job run or execution is still active. It must finish running, or be stopped before it can be deleted. Filter on status ‘Active’ to see active executions.”

Doom scenarios like “I will never be able to make a backup of this database anymore” came to my mind. But finally Oracle Support sent me this script:
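The script itself is not reproduced in this copy of the post. Scripts of this kind from Oracle Support look up the job in the repository and force-stop its executions via the MGMT_JOB_ENGINE package. The sketch below shows the general shape only; the exact packages and procedures differ per Grid Control version, so treat it as an illustration rather than the actual Support script:

```sql
-- Run as SYSMAN in the Grid Control repository database.
-- Fill in the job name and owner from the job activity list.
DECLARE
  v_job_id  mgmt_job.job_id%TYPE;
BEGIN
  SELECT job_id
  INTO   v_job_id
  FROM   mgmt_job
  WHERE  job_name  = '<JOB_NAME>'
  AND    job_owner = '<JOB_OWNER>';

  -- Force-stop all executions of the stuck job
  mgmt_job_engine.stop_all_executions_with_id(v_job_id, TRUE);
  COMMIT;
END;
/
```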

You have to run this script as user sysman on the OEM Grid Control repository database. You can find the name and owner of the job in the job activity list. Using this script I was able to solve my problem: the running job was deleted.

However, I then had to schedule a new backup job for this database, because the script deletes all runs of the job and thus also its next occurrences.

So if you ever run into a similar kind of problem, you can solve it by running this script and scheduling a new job.

Now life is easy again using OEM Grid Control, but not as easy as they promised us at first.