
Accord Contracts Ltd


Category Archives: Oracle

With Oracle 12c it is now possible to query patch information directly from SQL.

This is done using the DBMS_QOPATCH package to fetch the output from OPatch as an XML document. We then process this document using XMLTABLE() to map the XML attributes to columns and join the results to DBA_REGISTRY_SQLPATCH.
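The query below is a minimal sketch of the technique. The XPath paths (patchID, uniquePatchID, patchDescription) follow the layout of the OPatch inventory XML; verify them against the output of DBMS_QOPATCH.GET_OPATCH_LSINVENTORY in your version before relying on them:

```sql
-- Sketch: fetch the OPatch inventory as XML and map patch attributes
-- to relational columns (12c onwards; XPath paths are assumptions
-- based on the typical OPatch XML layout)
SELECT x.patch_id, x.patch_uid, x.description
FROM   XMLTABLE('/InventoryInstance/patches/patch'
         PASSING dbms_qopatch.get_opatch_lsinventory()
         COLUMNS patch_id    NUMBER        PATH 'patchID',
                 patch_uid   NUMBER        PATH 'uniquePatchID',
                 description VARCHAR2(100) PATH 'patchDescription') x;
```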

It is possible to determine all Patch Set Updates that have been applied to a database using a simple SQL statement – but note that this will only retrieve the details of patchsets that contained a database element (i.e. you had to run a SQL script as part of the patch).
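A sketch of such a query is shown below. It reads the patch history that SQL patch scripts record in the data dictionary (SYS.REGISTRY$HISTORY on pre-12c databases; on 12c DBA_REGISTRY_SQLPATCH holds the equivalent information):

```sql
-- Sketch: PSUs are only recorded here when the patch included a SQL step
SELECT action_time, action, version, id, comments
FROM   sys.registry$history
WHERE  action = 'APPLY'
ORDER  BY action_time;
```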

Note: It is better to use the opatch utility to retrieve the details of all patches and patchsets applied to an environment. However, this query may still prove useful.

Introduction

We were recently creating some custom CRS resources that needed to start up and shut down automatically, following the primary database in a DataGuard configuration. To achieve this, we planned to add start, stop and pullup dependencies between the database service resource and our custom resource.

However, the behaviour of the pullup dependency as described in the Oracle documentation is counter-intuitive and so I decided to do some testing around it to see the results and reassure myself that the documentation was correct.

In short, the documentation states that if resource A has a START_DEPENDENCY=pullup(B) dependency then resource B will pull up resource A (i.e. starting B will start A).

This test confirms whether this is indeed the case.

1. Create the test scripts

Firstly, we need to create the necessary action scripts that will be used by our test resources. Let’s create a generic action script which can start, stop and check a given script/command:
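A minimal sketch of such an action script is shown below. CRS calls the script with one argument (start, stop, check or clean); here a flag file stands in for the real managed process so the sketch is self-contained, and the flag path is an illustrative assumption:

```shell
#!/bin/ksh
# Generic CRS action script sketch. CRS passes the action in $1.
# A flag file simulates the managed process; replace the start/stop
# branches with your own script/command.
FLAG=/tmp/myres.$$.up

action() {
  case "$1" in
    start)      touch "$FLAG" ;;        # start the managed process
    stop|clean) rm -f "$FLAG" ;;        # stop / force-clean it
    check)      [ -f "$FLAG" ] ;;       # exit 0 = online, non-zero = offline
    *)          echo "usage: start|stop|check|clean" >&2; return 2 ;;
  esac
}
```

A real resource would then register this script via `crsctl add resource ... -attr "ACTION_SCRIPT=..."` and CRS would invoke it with the appropriate action.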

This guide is intended to help establish the origin of failed database login attempts. Most of the time these failed attempts are due to typos or outdated application configurations and are therefore not malicious. However, because default password management policies set a limit on failed login attempts, they often cause accounts to become locked out for legitimate users.

Note: An ACCOUNT_STATUS of “LOCKED” (in DBA_USERS) means the account was locked manually by the DBA, whereas “LOCKED(TIMED)” indicates the account was locked due to the number of failed login attempts being exceeded (as defined by FAILED_LOGIN_ATTEMPTS in the profile in effect for the user).

The following options are available for collecting information on the origin of failed connection attempts (in order of simplicity):

1. Using database auditing (if already enabled)

Caveat: This is the simplest method for determining the source of failed login attempts, provided that auditing is already enabled on your database, as the information has (probably) already been captured. However, if auditing is not enabled then enabling it will require a database restart, in which case this option is no longer the simplest!

Firstly, check to see whether auditing is enabled and set to “DB” (meaning the audit trail is written to a database table).

show parameter audit_trail

If not set, then you will need to enable auditing, restart the database and then enable auditing of unsuccessful logins as follows:

audit session whenever not successful;
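For completeness, the full sequence when audit_trail is not yet set looks like this sketch (audit_trail is a static parameter, hence the restart):

```sql
-- Sketch: enable DB auditing, restart, then audit failed logons
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;
-- ...restart the database, then:
AUDIT SESSION WHENEVER NOT SUCCESSFUL;
```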

The audit records for unsuccessful logon attempts can then be found as follows:
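A sketch of the query is shown below. Failed logons are recorded in DBA_AUDIT_SESSION with a return code of 1017 (ORA-01017: invalid username/password):

```sql
-- Sketch: list failed logon attempts from the DB audit trail
SELECT os_username, username, userhost, terminal, timestamp, returncode
FROM   dba_audit_session
WHERE  returncode = 1017
ORDER  BY timestamp;
```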

Note: the USERHOST column is only populated with the client host machine name as of 10g; in earlier versions it held the numeric instance ID of the Oracle instance from which the user was accessing the database in a RAC environment.

2. Use a trigger to capture additional information

The following trigger code can be used to gather additional information about unsuccessful login attempts and write it to the database alert log. If you already have a trigger for this triggering event, it is recommended to integrate this code into the existing trigger.
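A sketch of such a trigger is shown below. The trigger name is illustrative; it fires on every server error, filters for ORA-1017, and uses DBMS_SYSTEM.KSDWRT to write to the alert log:

```sql
-- Sketch: log failed logon attempts (ORA-1017) to the alert log
CREATE OR REPLACE TRIGGER failed_logon_alert
AFTER SERVERERROR ON DATABASE
DECLARE
  l_msg VARCHAR2(500);
BEGIN
  IF (IS_SERVERERROR(1017)) THEN
    l_msg := 'Failed login attempt: user='
          || SYS_CONTEXT('USERENV', 'AUTHENTICATED_IDENTITY')
          || ' host='    || SYS_CONTEXT('USERENV', 'HOST')
          || ' program=' || SYS_CONTEXT('USERENV', 'MODULE');
    SYS.DBMS_SYSTEM.KSDWRT(2, l_msg);  -- 2 = write to the alert log
  END IF;
END;
/
```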

3. Setting an event to generate trace files on unsuccessful login

You can instruct the database to write a trace file whenever an unsuccessful login attempt is made by setting the following event (the example below will only set the event until the next time the database is restarted. Update your pfile or spfile accordingly if you want this to be permanent).

alter system set events '1017 trace name errorstack level 10';

Trace files will be generated in user_dump_dest whenever someone attempts to log in using an invalid username / password. As the trace is requested at level 10 it will include a section labelled PROCESS STATE containing trace information such as:

In this case it was a ‘frmweb’ client running as OS user ‘orafrms’ that started the client session. The section “Call Stack Trace” may aid support in diagnosing the issue further.

Note: If the OS user or program is ‘oracle’ the connection may originate from a Database Link.

4. Using SQL*Net tracing to gather information

A SQL*Net trace can provide even more detail about the connection attempt, but use this only if none of the above succeed in determining the origin of the failed login: once SQL*Net tracing is enabled it can be hard to find what you are looking for, and the traces can potentially consume large amounts of disk space.

To enable SQL*Net tracing create or edit the server side sqlnet.ora file and add the following parameters:
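A sketch of the relevant parameters is shown below; the trace directory path is an illustrative assumption and level 16 is the most verbose (support) level:

```
# Sketch of server-side sqlnet.ora tracing parameters
TRACE_LEVEL_SERVER = 16
TRACE_DIRECTORY_SERVER = /u01/app/oracle/network/trace
TRACE_FILE_SERVER = svr
TRACE_TIMESTAMP_SERVER = ON
```

Remember to set TRACE_LEVEL_SERVER back to OFF (or remove the parameters) once the investigation is complete.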

If you have ever wanted to wrap PL/SQL code that already exists within the database, you can do so using the simple procedure below, which uses a combination of DBMS_METADATA and DBMS_DDL.

Our reason for doing this was that our code is deployed directly from source control into the database and needs to be wrapped after deployment to stop casual viewing of the source. We didn’t want to store wrapped code in source control, and didn’t have access to the wrap tool from the deployment server.
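A sketch of the approach is shown below. The procedure name wrap_object is illustrative; it fetches the object’s DDL with DBMS_METADATA and recreates it wrapped via DBMS_DDL.CREATE_WRAPPED:

```sql
-- Sketch: wrap an existing PL/SQL object in place
CREATE OR REPLACE PROCEDURE wrap_object (
  p_type  IN VARCHAR2,   -- e.g. 'PROCEDURE', 'PACKAGE_BODY'
  p_name  IN VARCHAR2,
  p_owner IN VARCHAR2 DEFAULT USER)
AS
  l_ddl VARCHAR2(32767);  -- sources over 32K need the array overloads
BEGIN
  -- Fetch the current DDL without the trailing SQL terminator
  DBMS_METADATA.SET_TRANSFORM_PARAM(
    DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', FALSE);
  l_ddl := DBMS_METADATA.GET_DDL(p_type, p_name, p_owner);
  -- Recreate the object with its source wrapped
  SYS.DBMS_DDL.CREATE_WRAPPED(l_ddl);
END wrap_object;
/
```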

Whilst developing some administrative functions within PL/SQL I needed to know whether the database that the code was being run in was part of an Oracle DataGuard setup, which you would think would be a trivial exercise.

My first thought was that we could determine this via v$database: if the DATABASE_ROLE column contains STANDBY then we know it’s a DataGuard configuration… but what if the database role is PRIMARY? How then can we determine whether that’s a stand-alone database or a DataGuard primary database?

Next, I thought looking in v$parameter might reveal some useful parameters, such as dg_broker_config_file. However, this only tells us whether a DataGuard configuration has been set up using DataGuard Broker – not whether the configuration is active, or whether it was configured manually (without the broker).

My final solution was to query v$archive_dest_status to determine whether there were any valid remote destinations. This isn’t foolproof by any means, but it covers most of the likely scenarios. If you have any better solutions then please post them in the comments.

Here’s the query I used within a PL/SQL block:

DECLARE
  l_isDataguarded NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_isDataguarded
    FROM v$archive_dest_status
   WHERE status = 'VALID'
     AND type != 'LOCAL';

  IF (l_isDataguarded > 0) THEN
    NULL; -- The database is part of a valid DataGuard configuration
  ELSE
    NULL; -- The database is standalone
  END IF;
END;
/

The SCAN listener that did not have the service registered had an uptime much shorter than the others, meaning that this SCAN listener had failed over fairly recently. This led me to Bug 13066936 (“Instance does not register services when scan fails over”), which is mentioned in Doc ID 1373350.1 in Metalink/MOS. The problem is fixed in 11.2.0.3.11, 11.2.0.4, 12.1.0.1 and 11.2.0.3 Bundle 24 for Exadata.

Solution:

Upgrade the Grid Infrastructure and RDBMS Oracle installations to a fixed version.

Workaround:

The workaround is to make a note of the current value of remote_listener, change it to a null value and then change it back as follows:

col remote_val new_value remote_val
select value remote_val from v$parameter where name='remote_listener';
alter system set remote_listener='';
alter system register;
alter system set remote_listener='&remote_val';
alter system register;

I have written a short Korn shell script (below) which will re-register all running instances on a node with the SCAN listeners using the workaround detailed above. If you encounter this bug and need a quick workaround, run this script once on each node in the cluster.
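A sketch of such a script is shown below. It assumes local OS authentication as SYSDBA, derives the instance list from the pmon processes, and runs the workaround SQL in each instance; adjust the environment setup (ORACLE_HOME, PATH) for your servers:

```shell
#!/bin/ksh
# Sketch: re-register every running instance on this node with the
# SCAN listeners by bouncing remote_listener (see workaround above).
ps -ef | grep '[o]ra_pmon_' | awk -F'ora_pmon_' '{print $2}' |
while read SID; do
  export ORACLE_SID=$SID
  sqlplus -s / as sysdba <<EOF
col remote_val new_value remote_val
select value remote_val from v\$parameter where name='remote_listener';
alter system set remote_listener='';
alter system register;
alter system set remote_listener='&remote_val';
alter system register;
exit
EOF
done
```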

This is a subject that I am often asked about by developers and support teams changing application configuration details: What is the correct format of JDBC connection strings for the thin driver?

Oracle’s JDBC Thin driver uses Java sockets to connect directly to Oracle. It provides its own TCP/IP version of Oracle’s SQL*Net protocol. Because it is 100% Java, this driver is platform independent and can also run from a Web Browser (applets).

Old syntax, supporting instance (SID) names only:

jdbc:oracle:thin:@HOST:PORT:SID

New syntax, supporting both SID and SERVICE names:

jdbc:oracle:thin:@//HOST:PORT/SERVICE

Some driver versions also support a URL syntax that allows the Oracle user id and password to be placed directly in the URL:

jdbc:oracle:thin:USER/PASSWORD@//HOST:PORT/SERVICE