Yann Neuhaus

It is hot in Europe, it is summer, enjoy it, but technology moves fast, so you already have the chance to prepare for the next conferences. The IT Tage 2017 will take place from the 11th to the 14th of December this year and we are happy to be there again.
This event covers a wide range of topics and we will be there again, covering and talking about:

I described in an earlier post on AWR views how the dictionary views were using metadata and object links to show information from other containers. But this mechanism cannot work for fixed views (aka V$) because they don’t have their definition in the dictionary.
The big difference is that most of the V$ views are available long before the dictionary is opened or even created. Just start an instance in NOMOUNT and you can query the V$ views. Even in multitenant, you can switch to different containers in MOUNT and query V$ views when no dictionary is opened.

V$ views query information from the instance and this information pertains to one container:

CON_ID=0 for the CDB itself

CON_ID=1 for CDB$ROOT

CON_ID=2 for PDB$SEED

CON_ID=3 for the first PDB you have created

When you are in the root, the V$ views are queried as normal and show all information – from all containers – with their related CON_ID.

When you are in a PDB, you must see the objects that belong to your PDB, but not those that belong to other PDBs. But this is not sufficient. For example, you may query the version, and the version is related to the CDB itself, with CON_ID=0:

Then, in a PDB you should see your PDB objects and the CON_ID=0 ones. Oracle needs a new mechanism for that. One way would be to switch to root, query the V$ and filter on CON_ID, but we don’t need that. A context switch is there to access data from a different container tablespace, because tablespaces are not shared. But V$ views expose data from the instance, and the instance is shared. Any container can see all rows, and we just want to filter out some rows.

An additional predicate (“CON_ID”=0 OR “CON_ID”=3) is added to the view. How is it done? Oracle has a security feature for that: Virtual Private Database – aka Row Level Security – which adds a where clause dynamically.
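To picture the mechanism that is borrowed here, this is how a regular VPD policy works: a policy function returns a predicate string which is transparently appended to queries on the object. A generic sketch with made-up schema, table and policy names (this is not the internal CON_ID policy, just an illustration of DBMS_RLS):

-- policy function: returns the predicate text to append to queries
create or replace function con_filter(p_schema varchar2, p_object varchar2)
  return varchar2 as
begin
  return 'CON_ID = 0 OR CON_ID = sys_context(''userenv'',''con_id'')';
end;
/
-- attach the policy to a table with DBMS_RLS
begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'MY_TABLE',
    policy_name     => 'CON_FILTER_POLICY',
    function_schema => 'APP',
    policy_function => 'CON_FILTER',
    statement_types => 'SELECT');
end;
/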

One way to get more information about a Virtual Private Database policy is to make its execution fail, and I know that a user with only the SELECT privilege cannot EXPLAIN PLAN (see MOS Note 1029064.6).

I connect to a PDB with a low privileged user:
SQL> connect scott/tiger@//localhost/PDB1
Connected.

There’s no container switch here: everything runs in PDB1 with CON_ID=3, and the internal VPD has added a where clause to filter rows with CON_ID=0 and CON_ID=3.

Do not search for the VPD policy name ‘CON_ID’ and function ‘CON_ID’ in the dictionary views, because this happens even when the dictionary is not accessible. This is an internal policy used when querying fixed views in multitenant, and it probably reuses only part of the VPD code.

ODSM is a quite powerful tool for managing the Oracle Unified Directory, and most of the default settings are very reasonable. But there is one that has disturbed me from the beginning: the ODSM session timeout. You might see a message like the following.

Or maybe this one.

It just says that the page will expire unless a response is received within 2 minutes. Doing complex searches or similar tasks often takes more than 2 minutes.

So, how can we increase the limit to, let’s say, 2 hours, which is a more reasonable value from my point of view?

In the early days, you had to edit a file called web.xml in a very intuitive directory and change the session-timeout to whatever value you like, in my example 2 hours.
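For reference, the relevant fragment of web.xml looks like the following; session-timeout is expressed in minutes, so 120 corresponds to 2 hours (a sketch, your odsm deployment may contain additional elements):

<session-config>
  <!-- session timeout in minutes: 120 = 2 hours -->
  <session-timeout>120</session-timeout>
</session-config>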

However, changing the value here has the disadvantage that it might not be permanent. This directory is a temporary cache directory for the WebLogic server. So, if you apply a patch that overwrites the …/odsm/odsm.ear file, the changes you made to web.xml in the temporary cache directory are also overwritten. So, it is not a good long term solution.

Way better is to do it via the WebLogic Console.

Log in to your WebLogic Console with the WebLogic user and navigate to “Deployments”. From there select the “/odsm” module.

Now move to the “Configuration” tab and change the Session Timeout (in seconds) to 7200, in case you want a 2h session timeout.

Now save the changes, and click OK one more time to save the Deployment Plan.

That’s it.

Conclusion

The default session timeout is way too short from my point of view. But no worries. Changing it via the WebLogic Console is quite easy and it might save you a lot of headaches.

Application Container brings a new way to share data among databases, and adds a new dimension to referential integrity. A foreign key in an application PDB can reference a row belonging to a root data link table. But then, should a delete on the root validate that there are no orphans in the application PDBs? And what if those PDBs are closed at the time of this delete? Here is a small example.
If you run this in 12.2.0.1 you will get an error, because the search for the parent key is done only in the current container. This is considered a bug: 21955394: CDB:ORA-02291 WHEN FOREIGN KEY REFERS TO THE PRIMARY KEY IN DATA LINK

The example that follows is run with the patch applied to fix this bug.

I’m connecting to root where I have no user PDB yet.
SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.

I insert an EMP row in the application PDB which references a DEPT row in the application root:
SQL> INSERT INTO EMP VALUES
2 (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);

1 row inserted.

As DEPT is an EXTENDED DATA LINK, I can add new rows in my PDB:
SQL> INSERT INTO DEPT VALUES
2 (50,'MY LOCAL DEPT','LAUSANNE');

1 row inserted.

And I can have an EMP row referencing this local parent:
SQL> INSERT INTO EMP VALUES
2 (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,50);
1 row inserted.

SQL> commit;
Commit complete.

This looks good. Now what happens if we delete all rows from DEPT in the application root?
SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
Connected.
SQL> delete from SCOTT.DEPT;
4 rows deleted.

Referential integrity works across containers: an application PDB can reference a parent key in the application root (provided that this bug is fixed). However, no ORA-02292 (child record found) is raised when child records are not in the current container. This makes sense: enforcing the verification of child records in all PDBs would require that they are opened, and might require locking the table in all containers. We must be aware that doing DML on the application root can lead to inconsistency if not done correctly.

Note that I’ve defined exactly which rows from DEPT I wanted to delete in the where clause: delete from scott.dept where deptno in (10,20,30,40);
You may be tempted to do something like: delete from scott.dept where deptno in (select deptno from scott.dept);
But keep in mind that the statements you run in the root are re-played as-is in the PDBs. And when you sync the PDB, the subquery sees no rows from DEPT because they were already purged from the root. Actually, what you want is to delete from EMP the rows which refer to the rows you have deleted from the root. It is not possible to get them with a subquery, unless you have stored them into another data link table before deleting them, as sketched below. Changes in the application root must be managed like application patches.
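A minimal sketch of that idea, assuming a DEPT_DELETED table defined as a DATA LINK in the application root (the table name is hypothetical): store the parent keys before purging them, so that the re-played statements can clean up EMP in each PDB at sync time.

-- run within the application upgrade on the application root (names are illustrative)
insert into scott.dept_deleted select deptno from scott.dept where deptno in (10,20,30,40);
delete from scott.emp  where deptno in (select deptno from scott.dept_deleted);
delete from scott.dept where deptno in (select deptno from scott.dept_deleted);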

One of the questions that pops up immediately after you have installed your OUD successfully is how to integrate it into the automatic startup routines of the OS.

My example here shows how to do it on Oracle Linux 6; on Oracle Linux 7 it looks a little different. Fortunately, Oracle delivers a script called “create-rc-script”, which can be found in your asinst home directory. It lets you specify the user name under which the OUD will run, the Java home and a few more options. The whole documentation can be found under the following link.

$ cat /etc/oracle-release
Oracle Linux Server release 6.9
$ ./create-rc-script -V
Oracle Unified Directory 11.1.2.3.170418
Build 20170206221556Z
$ ./create-rc-script --help
Create an RC script that may be used to start, stop, and restart the Directory
Server on UNIX-based systems
Usage: create-rc-script {options}
where {options} include:
-f, --outputFile {path}
The path to the output file to create
-u, --userName {userName}
The name of the user account under which the server should run
-j, --javaHome {path}
The path to the Java installation that should be used to run the server
-J, --javaArgs {args}
A set of arguments that should be passed to the JVM when running the server
General Options
-V, --version
Display Directory Server version information
-?, -H, --help
Display this usage information

Take care that you start the “create-rc-script” script from your asinst_1 home, and not from the Oracle_OUD1 home. The reason is that “create-rc-script” sets the working directory to your current directory (“WORKING_DIR=`pwd`”), and if it is not started from the correct directory, you might end up with a non-working start/stop script.

So, to do it correctly, switch to your OUD asinst home first and run it from there. I am using only a few options here: the Java home, the user name under which the OUD will run and the output file.
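A possible invocation could look like this; the asinst path, Java home and output file location are assumptions for illustration:

$ cd /u01/app/oracle/asinst_1/OUD
$ ./create-rc-script -j /usr/java/jdk1.8.0_131 -u oracle -f /etc/init.d/oud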

The generated start/stop script looks quite complete. The only thing missing is the “status” section which is quite useful from my point of view. To add the status section, we can use the “status” script, which is also part of the OUD installation.

$ ./status --help
This utility can be used to display basic server information
Usage: status {options}
where {options} include:
LDAP Connection Options
-D, --bindDN {bindDN}
DN to use to bind to the server
Default value: cn=Directory Manager
-j, --bindPasswordFile {bindPasswordFile}
Bind password file
-o, --saslOption {name=value}
SASL bind options
-X, --trustAll
Trust all server SSL certificates
-P, --trustStorePath {trustStorePath}
Certificate trust store path
-U, --trustStorePasswordFile {path}
Certificate trust store PIN file
-K, --keyStorePath {keyStorePath}
Certificate key store path
-u, --keyStorePasswordFile {keyStorePasswordFile}
Certificate key store PIN file
-N, --certNickname {nickname}
Nickname of certificate for SSL client authentication
--connectTimeout {timeout}
Maximum length of time (in milliseconds) that can be taken to establish a
connection. Use '0' to specify no time out
Default value: 30000
Utility Input/Output Options
-n, --no-prompt
Use non-interactive mode. If data in the command is missing, the user is
not prompted and the tool will fail
-s, --script-friendly
Use script-friendly mode
--propertiesFilePath {propertiesFilePath}
Path to the file containing default property values used for command line
arguments
--noPropertiesFile
No properties file will be used to get default command line argument values
-r, --refresh {period}
When this argument is specified, the status command will display its
contents periodically. Used to specify the period (in seconds) between two
displays of the status
General Options
-V, --version
Display Directory Server version information
-?, -H, --help
Display this usage information

Take care: by default, the status utility is interactive, and it asks you for the user bind DN and the password. So, the interactive version is not useful for our start/stop script.
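A non-interactive call, suitable for embedding in the start/stop script, could look like the following sketch (the bind DN and the password file location are assumptions; the options are taken from the help output above):

$ ./status --no-prompt --trustAll -D "cn=Directory Manager" -j /home/oracle/.oud_pwd --script-friendly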

I can see that nss3 has started as it is the log_archive_dest_3 which is in SYNC now:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA

In the last post we did a click/click/click setup of the PEM server. What we want to do now is to attach a PostgreSQL instance to the PEM server so that we are able to monitor and administer it. For that we need to install the PEM agent on a host where we have a PostgreSQL instance running (192.168.22.249 in my case, which runs PostgreSQL 10 Beta1). Let’s go…

As usual, when you want to have the systemd services generated automatically you should run the installation as root:

In case you missed it: EnterpriseDB has released the beta of Postgres Enterprise Manager 7. If the installation is as easy as for the current version, it should just be a matter of clicking next; let’s see.

Because the installer will create the systemd services installation should be done as root:

Many ways exist to rotate the listener log, but I was trying to point out some issues, because there are a few, e.g.

What happens when the log file reaches 4G?

What are the performance implications?

What happens if I move the listener.log while the listener is running (Open file descriptor)?

And how to rotate the listener log with minimal impact?

The first two points have already been discussed in the previous post, so I’m not going deeper into those. Let’s take a look at the other ones, and start with the file descriptor issue. In the output below, you can see that the listener has an open file descriptor “3w” which is pointing to “/u01/app/oracle/network/log/listener.log”. It can be quite easily identified by using the lsof utility (list open files).

As you can see, a new listener.log is not automatically created, and the file descriptor “3w” is now pointing to /u01/app/oracle/network/log/listener.log.1.

That’s why it is not a good idea to move the listener log without stopping listener logging first. And please don’t try to send a hangup signal to the listener: the listener does not understand the hangup signal (kill -HUP) and you would kill it immediately.

Ok. Let’s get back to the question of how to rotate the listener log with minimal impact.

If you have activated listener logging, then there is a reason for doing so and you don’t want to lose data out of that log file. In case losing any listener log data is not acceptable, you have to stop the listener, rotate the log and start the listener again. There is no way around that. Of course it has the disadvantage that new sessions cannot be established during that time.

However, if you want the listener to be up and running, and rotate the listener log with an absolute minimum of log data loss (we are talking about milliseconds here), then I would use the approach which I have already described in my previous post.

I have quickly written a listener rotate script which demonstrates what it can look like. I know that the script is not foolproof, and a lot of extra checks and tweaks can be built in, but you will get an idea of how it can look. The script takes two parameters: the first one is the listener name, and the second one is the number of days the listener logs should be kept on disk. Everything older than that will be removed.
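A minimal sketch of such a script, not the original one, assuming the listener log is written to /u01/app/oracle/network/log/listener.log as shown above:

#!/bin/bash
# rotate_listener_log.sh <LISTENER_NAME> <RETENTION_DAYS>
LISTENER=${1:-LISTENER}
RETENTION_DAYS=${2:-30}
LOGDIR=/u01/app/oracle/network/log   # adjust to your log directory

# stop listener logging online, rotate the file, then re-enable logging
lsnrctl <<EOF
set current_listener $LISTENER
set log_status off
EOF

mv $LOGDIR/listener.log $LOGDIR/listener.log.$(date +%Y%m%d%H%M%S)

lsnrctl <<EOF
set current_listener $LISTENER
set log_status on
EOF

# purge rotated logs older than the retention period
find $LOGDIR -name "listener.log.*" -mtime +$RETENTION_DAYS -delete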

If you run that rotate listener log script at a time (e.g. in the middle of the night) where you expect minimal activity on the DB, you can minimize the chance of losing listener log entries even further.

Nowadays, since the cloud is becoming more and more important, the PL/SQL APIs are becoming more and more important too. Fortunately, Oracle has quite a lot of them. For example, how do you run a Data Pump export if you have no ssh connectivity to the server? You could use the old exp tool, which is still available even with Oracle 12.2, or you can use DBMS_DATAPUMP. The Data Pump API is quite well documented in the following books:

Without this role, you might run into the following error when doing a full export:

ERROR at line 1:
ORA-31631: privileges are required
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6844
ORA-06512: at line 6

3.) Now it’s time to run the Data Pump job. Be aware that for a consistent export, you need to specify FLASHBACK_TIME or FLASHBACK_SCN. In my case, I use FLASHBACK_TIME and set it to the current SYSTIMESTAMP.
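A minimal sketch of such a job through the DBMS_DATAPUMP API; the directory object, file names and job name are assumptions for illustration:

declare
  l_handle number;
  l_state  varchar2(30);
begin
  -- open a full export job
  l_handle := dbms_datapump.open(operation => 'EXPORT',
                                 job_mode  => 'FULL',
                                 job_name  => 'MY_FULL_EXPORT');
  -- dump and log files in an existing directory object
  dbms_datapump.add_file(handle    => l_handle,
                         filename  => 'full_export%U.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle    => l_handle,
                         filename  => 'full_export.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => dbms_datapump.ku$_file_type_log_file);
  -- consistent export: pin the export to the current SYSTIMESTAMP
  dbms_datapump.set_parameter(handle => l_handle,
                              name   => 'FLASHBACK_TIME',
                              value  => 'SYSTIMESTAMP');
  dbms_datapump.start_job(l_handle);
  dbms_datapump.wait_for_job(handle => l_handle, job_state => l_state);
  dbms_output.put_line('Job finished with state: ' || l_state);
end;
/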

5.) Finally, you might want to do some cleanup in case you don’t need the dump files and the log files anymore. Or you start your export job with the REUSE_DUMPFILES=YES option, which overwrites the destination dump files if they exist. In case you want to do the cleanup manually, you can use the UTL_FILE package.
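A hypothetical manual cleanup with UTL_FILE, reusing the directory object and file names assumed above:

begin
  utl_file.fremove('DATA_PUMP_DIR', 'full_export01.dmp');
  utl_file.fremove('DATA_PUMP_DIR', 'full_export.log');
end;
/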

The PL/SQL APIs are becoming more and more important, especially in cloud environments. It makes quite a lot of sense, from my point of view, to take a closer look at one or the other. DBMS_DATAPUMP especially is an important one for moving data around.

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and can then synchronize a copy of the datafiles. I do not recommend using it for disaster recovery in any production environment yet. Even if I’m using only supported features, they were not designed for this usage, and are quite new and not stable yet. Disaster Recovery must use safe and proven technologies, and this is why I’ll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet:

Primary PRDPDB

On my primary server, I have a CDB1 container database in Standard Edition with one Pluggable Database: PRDPDB:
21:36:45 SQL> connect sys/oracle@//192.168.78.105/CDB1 as sysdba
Connected.

I need a user there to be able to remote clone from it:
21:36:46 SQL> grant create session, sysoper, dba to C##DBA identified by oracle container=all;
Grant succeeded.

Standby server

On my standby server, I have a CDB1 container database in Standard Edition, where I create a database link to the production CDB using the user created above to connect to it:
21:36:46 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:36:46 SQL> create database link CDB1A connect to C##DBA identified by oracle using '//192.168.78.105/CDB1A';
Database link created.

My standby server runs Grid Infrastructure and has the database created on /acfs which is an ACFS filesystem. We will see the reason later when we will need to create a PDB snapshot copy. Any filesystem where you can use PDB snapshot copy would be fine.

Standby SBYPDB

The creation of the ‘standby’ pluggable database is done with a simple remote clone command and can be run in 12cR2 with the source PRDPDB still open read write:
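The command itself is not reproduced here, but it presumably looks like the following sketch, using the CDB1A database link created above (the file_name_convert paths are assumptions):

create pluggable database SBYPDB from PRDPDB@CDB1A
  file_name_convert=('/u01/app/oracle/oradata/CDB1/PRDPDB/','/acfs/oradata/CDB1/SBYPDB/')
  refresh mode every 1 minutes;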

REFRESH MODE is a 12cR2 feature whose primary goal is to maintain and refresh a master clone for further provisioning of thin clones. This clone is refreshed every minute, which means that I expect to have at most a one-minute gap between PRDPDB and SBYPDB data, plus the additional time to apply that one minute of redo, of course.

Activity on the source

I will simulate a crash of the primary server and a failover to the standby, when transactions are running. I’ll run this activity on the SCOTT.EMP table:
21:39:03 SQL> connect scott/tiger@//192.168.78.105/PRDPDB;
Connected.

The datafiles are up to date, with a maximum one-minute gap, and all I want is to open the PDB and have the application re-connect to it. However, a refreshable clone can be opened only read-only. This makes sense: you cannot apply more redo from the source once it is opened read-write. So my first idea was to stop the refresh mode:
21:41:45 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:41:45 SQL> alter session set container=SBYPDB;
Session altered.
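The statement to end the refresh mode would be the following; this is the step that fails here, because it triggers a last refresh from the unreachable source:

alter pluggable database SBYPDB refresh mode none;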

It seems that Oracle tries to do one last refresh when you stop the refresh mode, but this fails here because the source is not accessible. I think that it should be possible to open read-write without applying more redo. However, these refreshable clones were not designed for failover.

I hope that one day we will just be able to end refresh mode without connecting to source, accepting to lose the latest transactions.

Open Read Only

Without access to the source, I stay in refresh mode and I can only open read only:
21:41:45 SQL> alter pluggable database SBYPDB open read only;
Pluggable database altered.

My data is there, with my less than one minute gap, but that’s not sufficient for me. I want to run my application on it.

Snapshot Clone

My first idea to get the PDB read write on the standby server is to clone it. Of course, the failover time should not depend on the size of the database, so my idea is to do a snapshot copy, and this is why I’ve set up my standby CDB on ACFS. Here I’m cloning SBYPDB to the same name as the primary: PRDPDB
21:41:47 SQL> alter session set container=CDB$ROOT;
Session altered.

I’m running on a snapshot here. I can stay like that, or plan to move it out of the snapshot in the future. There is no online datafile move in Standard Edition, but there is the online pluggable database relocate. Anyway, running the database in a snapshot is sufficient to run a production after a Disaster Recovery, and I can remove SBYPDB so that there is no need to copy the ACFS extents on future writes.

Keep the snapshot

At this point, you should tell me that I cannot snapshot copy a PDB within the same CDB here because I’m in Standard Edition. And that’s right: you can create only one PDB there and you are supposed to get a ‘feature not enabled’ error. But I was able to do it here in my lab, with a small trick to invert the CON_ID sequence:
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

Remote snapshot clone should be possible as well. But there’s another licensing issue here. Using ACFS snapshots for the database is not allowed in Standard Edition. This means that this solution probably requires another snapshot solution than the one I’m using here in my lab.

If you are not afraid of violating the single-tenant rules, you may prefer to keep SBYPDB for a while. Imagine that you are able to restart the crashed server for a few minutes; then you can do a last refresh of SBYPDB to have a look at the transactions that were lost in the one-minute window.

Be careful not to have jobs or services starting here because your production is now on the snapshot clone PRDPDB running on the same server. Let’s open it:
21:43:02 SQL> alter pluggable database SBYPDB open restricted;
Pluggable database altered.

And here we are with the data at the moment of the crash. Then, the application owner can manually check what was missed between the last refresh (which made its way to PRDPDB) and the crash (visible in SBYPDB).

Unplug/Plug

I was not very satisfied by the snapshot clone because of the limitations in Standard Edition, which is where this solution may be interesting. I have the datafiles but cannot open SBYPDB read write. I tried to unplug it but cannot because of the refresh mode:
SQL> alter pluggable database SBYPDB unplug into '/tmp/tmp.xml';

Here it is. This takes a bit longer than the snapshot solution but is still a way to activate the ‘standby’ PDB without copying datafiles.

So what?

All the new 12cR2 multitenant features are available in all Editions, which is very good. Here with ALTER PLUGGABLE DATABASE … REFRESH we have log shipping and apply, for free in Standard Edition, at PDB level. And I’ve tested two ways to open this standby PDB in case of disaster recovery. I’m using only supported features here, but be careful that those features were not designed for this goal. The normal operations on refreshable clone require that the remote CDB is accessible. But there are workarounds here because you can describe/drop/plug or snapshot clone from a PDB that you can open read only.

No, Kafka is not only the famous author (en.wikipedia.org/wiki/Franz_Kafka), it’s also an open-source distributed pub-sub messaging system with powerful capabilities like scalability and fault tolerance. It’s also a stream processing platform (near real-time) for streaming data sources. The design of Apache Kafka is strongly influenced by commit logs. Apache Kafka was originally developed by LinkedIn and was subsequently open sourced in early 2011.

The installation is pretty simple but needs to be done rigorously.

Binaries installation

Prerequisites
Get a Linux server (I have chosen CentOS 7.3.1611); it can run on a small configuration (1 GB memory minimum)
Connect as a sudo user or root

The Confluent Platform is an open source platform that contains all the components you need
to create a scalable data platform built around Apache Kafka.
Confluent Open Source is freely downloadable.
Install the public key from Confluent

Ok, the binaries are installed now. The next operation will be to configure and launch Zookeeper and Kafka itself!

First, take a look at the Zookeeper configuration:

[root@osboxes kafka]# cat /etc/kafka/zookeeper.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0

Don’t change the configuration file (the default values are okay to start with) and launch Zookeeper
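With the Confluent packaging, the launch and a quick producer/consumer test presumably look like the following sketch (the topic name and the default ports are assumptions; run the producer and the consumer in two different terminals):

# start Zookeeper and Kafka with the packaged configuration files
$ zookeeper-server-start /etc/kafka/zookeeper.properties &
$ kafka-server-start /etc/kafka/server.properties &

# create a test topic, then attach a console producer and a console consumer
$ kafka-topics --zookeeper localhost:2181 --create --topic test --partitions 1 --replication-factor 1
$ kafka-console-producer --broker-list localhost:9092 --topic test
$ kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning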

Et voilà! The messages produced with the producer now appear in the consumer window. You can type a new message in the producer console and it will display immediately in the other terminal. If you want to stop all the consoles, you can press Ctrl-C.

Now the most difficult thing still remains to be done: configuring Kafka with multiple producers/consumers within a complex broker topology.

Ensuring data availability is an important part of the high availability design. SQL Server AlwaysOn features including SQL Server FCIs and availability groups address some aspects of the problem, but we may also rely on the online operations features to maximize data availability. Indeed, for some highly critical workloads, offline operations during the maintenance phase are not permitted and may call the entire architecture into question.

But have you ever faced an outage of your primary replica leading to a failover to the secondary replica during the database maintenance window? Yes, your architecture is designed to behave this way for sure, but let’s say you were in the middle of an index rebuild operation on a big table and everything is now rolled back. In this case, you will probably have to wait for the next maintenance window to initiate another rebuild operation, but from the start …

Fortunately, the new Resumable Online Indexes feature from SQL Server 2017 will come to the rescue and seems promising for addressing such situations. Some limitations still exist and are well documented in the Microsoft documentation, but I guess Microsoft will work hard to remove them in the future.

Let’s play a little bit with the Resumable Online Indexes on my AG environment that includes two replicas with cluster_type = NONE meaning we don’t rely on an underlying orchestrator to handle automatic failover of the AG resources. In this context, it will be sufficient to demonstrate how Resumable Online Indexes work.

My initial configuration is as follows:

I used the AdventureWorks2016 database with a bigTransactionHistory table generated from the Adam Machanic script.

By using this option, I made the rebuild phase “resumable” and I’m now able to perform additional actions like pausing or aborting the index rebuild operation. I may also use the MAX_DURATION parameter to set the duration (in minutes) of the operation before it is paused. We may easily identify resumable sessions which are running on the SQL Server instance by using the sys.dm_exec_requests DMV and the new is_resumable column.
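For reference, a resumable online rebuild and the related control commands look like the following sketch (the index name is hypothetical, as the actual one is not shown here):

-- resumable online rebuild, limited to 60 minutes before it pauses itself
ALTER INDEX idx_bigTransactionHistory_TransactionDate
ON dbo.bigTransactionHistory
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- pause, resume or abort the operation from another session
ALTER INDEX idx_bigTransactionHistory_TransactionDate ON dbo.bigTransactionHistory PAUSE;
ALTER INDEX idx_bigTransactionHistory_TransactionDate ON dbo.bigTransactionHistory RESUME;
ALTER INDEX idx_bigTransactionHistory_TransactionDate ON dbo.bigTransactionHistory ABORT;

-- identify resumable requests currently running
SELECT session_id, command, status, percent_complete
FROM sys.dm_exec_requests
WHERE is_resumable = 1;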

Let’s initiate a manual failover while the index is rebuilt by SQL Server on the primary replica. Then during the outage of the first primary replica, I faced the following error message:

A typical error message regarding the situation … but the most interesting part comes now. If we go through the new sys.index_resumable_operations DMV, we may notice our rebuild index operation is paused.

percent_complete : the percent complete value at the moment of pausing the operation

page_count = size of the second index structure at the moment of pausing the operation

last_pause_time = it is self-explanatory

last_max_dop_used = max dop value used during the last rebuild operation. It is interesting to notice we may change this value for the same operation between pausing sessions.

Before resuming the index operation, let’s have a brief look at the transaction log space used. I asked myself some questions about the transaction behavior when the index operation is paused: does a pause state have an impact on the transaction log? Do the VLFs touched by this operation remain active until the index is completely rebuilt? Let’s answer these questions with the following test.

A first look at the transaction log space used says that the transaction log contains records that must be backed up.

Well, pausing an index operation doesn’t rely on a remaining active transaction that could impact transaction log retention, and after a few thoughts it seems obvious that it works this way.

What about storage? The resumable option is available only with online index operations meaning SQL Server must maintain the corresponding hidden structures as long as it is necessary to rebuild the underlying index. According to the BOL:

No extra resources are required for resumable index rebuild except for:
Additional space required to keep the index being built, including the time when index is being paused

Obviously, maintaining such a structure may have a huge impact on your workload depending on your context. Out of curiosity, I went through the system views to see those hidden structures as follows:

select
    o.name as table_name,
    i.name as index_name,
    p.index_id,
    au.type_desc,
    SUM(au.used_pages) as total_used_pages,
    SUM(p.rows) as total_rows,
    COUNT(*) as nb_partitions
from sys.allocation_units as au
join sys.partitions as p
    on au.container_id = p.hobt_id
join sys.objects as o
    on o.object_id = p.object_id
left join sys.indexes as i
    on i.object_id = p.object_id
    and i.index_id = p.index_id
where p.object_id = object_id('bigTransactionHistory')
group by o.name, i.name, p.index_id, au.type_desc
order by o.name, p.index_id;
go

Regarding the second record, some simple math – 9261 * 100. / 190150 – confirms the structure is only 4.8 percent of the size of the underlying clustered index. We may retrieve this result from the first output of the new sys.index_resumable_operations system view. For the third one, my guess is it corresponds to the temporary mapping index used by SQL Server to determine which records to delete in the new indexes that are being built when rows in the underlying table are updated or deleted.

Here is an execution plan sample of an update query against the bigTransactionHistory table. We may notice DELETE / UPDATE operations to a “hidden” nonclustered index related to this special index highlighted in red.

Let’s go ahead and resume the pending index operation by using the new RESUME option. We may also add the MAX_DURATION option to guarantee the rebuild operation goes back to a paused state when the maintenance window limit is reached, so that it cannot hurt the daily business workload in terms of resources, for instance. An option we will definitely consider adding in the next version of our DMK maintenance tool for SQL Server.

The listener log file contains a lot of very useful information, like the program which was used for the connection, the IP address where the connection is coming from, the OS user which was used on the client and many many more.

However, it does not only contain information about successful connections; it also shows when connections have been rejected because of TCP.VALIDNODE_CHECKING or any type of TNS error. So, why not use it for auditing? And what about the performance overhead of listener logging?

Let’s start first with the performance overhead. I am doing 4 types of tests here.

1. Performance with no listener logging (100 connections)
2. Performance with a small listener.log (100 connections)
3. Performance with big listener.log, close to 4G (100 connections)
4. Performance with a full listener.log, exactly 4G (100 connections)

As you can see in the results, sqlplus connections without listener logging are the fastest ones, and the bigger the file gets, the slower the connections are. But wait a second: what’s going on with test 4? As soon as the listener log is full, connections are faster again. The reason for test 4 being faster again is that the listener is not logging anymore. As soon as it reaches the 4G limit, which is still the case with Oracle 12.2, the listener does not crash like in some older versions, but the log entries go to /dev/null. So, I will lose all my auditing information for the connections made after the 4G limit was reached.

How do we overcome this issue? The answer is to rotate the listener log. But how do we do it? We can’t just simply move the old listener log away and create an empty new one, because the file descriptor is still open and you would create a huge mess.

We could stop the listener, rotate the listener log and start the listener again. A little better, but while the listener is stopped, no new connections are possible, which is also not a good idea.

From my point of view, the best solution is to stop listener logging online as soon as it hits 1G, rotate the listener log and start listener logging again, like in the following example:
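The core of it, for the default listener, presumably boils down to something like this (the log location is taken from the lsof output discussed in the follow-up post):

$ lsnrctl set log_status off
$ mv /u01/app/oracle/network/log/listener.log /u01/app/oracle/network/log/listener.log.$(date +%Y%m%d%H%M%S)
$ lsnrctl set log_status on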

Now you have the advantage that the log rotation is an online operation, and you don’t create any mess with open file descriptors. And if you rotate the listener log before it reaches 1G, you are also less likely to run into performance issues.

Conclusion

Take care of your listener log, so that it does not hit the 4G file size limit. You might lose very important information which will not be logged anymore. And do the listener log rotation correctly.

12.1 introduced Unified Auditing where you define policies and then enable them. As with the traditional audit, you enable them for all users or for specific users. The unified auditing adds a syntax to audit all users except some listed ones. 12.2 adds a syntax to audit a group of users, defined by the role granted. This is the best way to enable a policy for a group of users, including those created later.
I create a simple policy, to audit logon and DBA role usage:
SQL> create audit policy DEMO_POLICY actions logon, roles DBA;
Audit POLICY created.
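The creation of the first user and the initial enablement of the policy are not shown here; presumably they follow the same pattern as for USER2 below, something like:

create user USER1 identified by covfefe quota unlimited on USERS;
grant DBA to USER1;
audit policy DEMO_POLICY by USER1;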

Let’s connect with this user and see what is audited:
SQL> connect USER1/covfefe@//localhost/PDB1
Connected.

SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;

The logon and the select on a dictionary table (possible here thanks to the DBA role) have been audited because the policy is enabled for this user.

We have a new DBA and we create a new user for him:
SQL> create user USER2 identified by covfefe quota unlimited on USERS;
User USER2 created.
SQL> grant DBA to USER2;
Grant succeeded.

He connects and checks what is audited:
SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;

Nothing is audited for this user. The DBA role usage is audited, but only for USER1.

Of course, we can add an audit statement for each user created for a DBA:
SQL> audit policy DEMO_POLICY by USER2;
Audit succeeded.

Then his new activity is audited:
SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
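Enabling the policy user by user does not scale, though. This is presumably where the new 12.2 role-based syntax comes in, enabling the policy once for all users who are granted the DBA role:

audit policy DEMO_POLICY by users with granted roles DBA;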

The important thing is that a newly created user will be audited as long as he has the DBA role directly granted:
SQL> create user USER3 identified by covfefe quota unlimited on USERS;
User USER3 created.
SQL> grant DBA to USER3;
Grant succeeded.

This policy applies to all users having the DBA role, and gives the possibility to audit more than their DBA role usage: here I audit all logins from users having the DBA role.

So what?

We don’t use roles only to group privileges to grant. A role is usually granted to define groups of users: DBAs, application users, read-only application users, etc. Unified Auditing can define complex policies, combining the audit of actions, privileges, and roles. The 12.2 syntax allows enabling a policy for a specific group of users.

When reading about the new version numbering in SQL Developer, I took the risk to change the title and guess the future version number for Oracle Database: Oracle 17.3 for July (Q3) of 2017. Of course, this is just a guess.

Updated June 10th

Confirming the previous guess, we start to see some bugs planned to be fixed in version 18.1 which is probably the January 2018 Release Update.

News from DOAGDB17 – May 30th

Oracle Database software comes in versions: 10g in 2004, 11g in 2007, 12c in 2013
In between, there are releases: 10gR2 in 2005, 11gR2 in 2009, 12cR2 in 2017
In between, there are Patch Sets: the latest 11gR2 is 11.2.0.4 (2 years after 11.2.0.3) and 12cR1 is 12.1.0.2 (one year after 12.1.0.1)
Those are full installs: a full Oracle Home. It can be in-place or into a new Oracle Home, but it is a full install. There are a lot of changes in the system dictionary and you run catupgrd.sql to update it. It takes from 30 minutes to 1 hour on average, depending on the components and the system.

Their primary goal is to release features. You should test them carefully. For example, the In-Memory option came in the first Patch Set of 12cR1.

This will change in 2017 with annual feature releases. Well, this looks like a rename of Patch Set.

All releases are supported several years, with fixes (patches) provided for encountered issues: security, wrong result, hanging, crash, etc. Rather than installing one-off patches, and requesting merges for them, Oracle supplies some bundle patches: merged together, tested as a whole, cumulative, with a quarterly release frequency. Depending on the content and the platform, they are called Bundle Patches (in Windows), CPU (only security fixes), SPU (same as CPU but renamed to SPU), PSU (Patch Set Update), Proactive Bundle Patch (a bit more than in the PSU)…
The names have changed, the versioning number as well as they now include the date of release.
You apply patches into the Oracle Home with OPatch utility and into the database dictionary with the new datapatch utility.

Their primary goal is stability: fixing issues with a low risk of regression. The recommended minimum is the PSU; more fixes for known bugs are in the Proactive Bundle Patch.

This will change in 2017 with quarterly Release Updates. Well, this looks like a rename of PSU to RUR (Release Update Revision) and a rename of ProactiveBP as RU (Release Update).

The goal is to reduce the need to apply one-off patches on top of PSUs.

Here is all I know about it, as presented by Andy Mendelsohn in the DOAG Datenbank 2017 keynote:

It is not common to have new announcements in the last fiscal year quarter. Thanks DOAG for this keynote.

In a previous post I introduced the new 12cR2 feature where some DDL operations can use the same rolling invalidation as dbms_stats does. On tables, DDL deferred invalidation is available only for operations on partitions. Here is how it works for partition exchange.
Here is my session environment:
SQL> whenever sqlerror exit failure
SQL> alter session set nls_date_format='hh24:mi:ss';
Session altered.
SQL> alter session set session_cached_cursors=0;
Session altered.
SQL> alter session set optimizer_dynamic_sampling=0;
Session altered.
SQL> alter system set "_optimizer_invalidation_period"=5;
System SET altered.
SQL> show user
USER is "DEMO"

CREATE TABLE FOR EXCHANGE does not create the indexes, but for rolling invalidation we need them. Without the same indexes, immediate invalidation occurs.

In order to observe invalidation, I run queries on the partitioned table, some involving the partition I’ll exchange and some not. I also run a query on the table used for the exchange.
SQL> SELECT * FROM DEMO partition (P1);
no rows selected
SQL> SELECT * FROM DEMO partition (P2);
no rows selected
SQL> SELECT * FROM DEMO;
no rows selected
SQL> SELECT * FROM DEMOX;
no rows selected

Here are the cursors and some execution plans:
SQL> select sql_id,sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;
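The exchange itself is not reproduced here; with the 12.2 syntax it presumably looks like the following sketch, with the deferred invalidation clause at the end (the partition name is assumed):

alter table DEMO exchange partition P1 with table DEMOX including indexes deferred invalidation;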

If I did the same without the DEFERRED INVALIDATION clause, or without including the indexes, or with different indexes, then I would see all cursors invalidated. Here only the select on the DEMOX table is invalidated:

SQL> select sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;

SQL> select sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;

I expected to see IS_ROLLING_INVALID going from ‘Y’ to ‘X’ here when the random time is set for invalidation.

By default, the random time is set within a 5-hour window, but I changed “_optimizer_invalidation_period” to 5 seconds instead, and I wait for this time window to pass to be sure that invalidation occurs. And then I run my queries again.

Here are the new child cursors created for the ones that were marked for rolling invalidation. The IS_ROLLING_INVALID did not display anything, but it seems that it works as expected:

SQL> select sql_text,child_number,invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time,is_rolling_invalid from v$sql where sql_text like 'S%DEMO%' order by sql_text;

The first observation is that deferred invalidation works with partition exchange, despite the fact that the V$SQL.IS_ROLLING_INVALID flag is not updated. I was surprised to see that rolling invalidation occurs even for the cursors accessing the partition which was exchanged. However, the rolling invalidation occurs only if the indexes are the same. If we do not exchange the indexes, then all cursors are invalidated immediately. This probably means that the parsed cursor remains able to run after the exchange, as the indexes are guaranteed to have the same structure, type, compression, etc.
This is a very nice feature when partition exchange is used to load new data into a fact table: you load into a staging table and then exchange it with the latest partition. The new values are exposed immediately and this new feature avoids a hard parse peak.

When cloning a PDB from a remote CDB you need to define a database link to be used in the CREATE PLUGGABLE DATABASE … FROM …@… command. The documentation is not completely clear about the privileges required on the source for the user defined in the database link, so here are the different possibilities.

Remote clone

Here is what the documentation says:

So you can connect to the CDB or to the PDB.

In order to connect to the CDB you need a common user with the CREATE SESSION system privilege:
SQL> create user C##DBA identified by oracle;
User C##DBA created.
SQL> grant create session to C##DBA container=current;
Grant succeeded.

No need for CONTAINER=ALL here because you connect only to the CDB$ROOT.

Then you need the CREATE PLUGGABLE DATABASE system privilege on the PDB. You can grant it from the CDB$ROOT with the CONTAINER=ALL but it is sufficient to grant it locally on the source PDB:
SQL> alter session set container=PDB1;
Session altered.
SQL> grant create pluggable database to C##DBA container=current;
Grant succeeded.

Note that, although not documented, the SYSOPER administrative privilege can replace CREATE PLUGGABLE DATABASE, so we can run the following instead of the previous one:
SQL> alter session set container=PDB1;
Session altered.
SQL> grant sysoper to C##DBA container=current;
Grant succeeded.

Then which alternative to use? The choice of a common or local user is up to you. I would probably use a common user for system administration tasks, and cloning is one of them. But if you are in a PDBaaS environment where you are the PDB administrator, then you can clone your PDB to another CDB that you manage. This can mean cloning a PDB from the Cloud to a CDB on your laptop.
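Once the grants are in place on the source, the clone itself is run from the target CDB through a database link; a minimal sketch (the host, link and clone names are assumptions):

-- on the target CDB, as a user with CREATE PLUGGABLE DATABASE
create database link CDB1_SRC connect to C##DBA identified by oracle using '//source-host/CDB1';
create pluggable database PDB1CLONE from PDB1@CDB1_SRC;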

PDB Relocate

Things are different with the RELOCATE option where you drop the source PDB and redirect the connection to the new one. This is definitely a system administration task to do at CDB level and requires a common user. Trying it from a database link connecting to a local user will raise the following error:
ORA-17628: Oracle error 65338 returned by remote Oracle server

65338, 00000, "unable to create pluggable database"
// *Cause: An attempt was made to relocate a pluggable database using a
// database link to the source pluggable database.
// *Action: Use a database link that points to the source multitenant container
// database root and retry the operation.

The documentation mentions that you can use either SYSDBA or SYSOPER, but from my tests (and Deiby Gómez’s) only SYSOPER works without raising an ‘insufficient privileges’ error. The documentation mentions that CREATE PLUGGABLE DATABASE is also necessary. Actually, it is not, and with a relocate it cannot be an alternative to SYSOPER. The user must be a common user, CREATE SESSION must be granted commonly, but SYSOPER can be granted locally for the PDB we relocate.
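For completeness, a sketch of the relocate itself, run on the target CDB with a database link pointing to the source CDB$ROOT and connecting to that common user (reusing the hypothetical link name from above); the relocation completes when the new PDB is opened:

create pluggable database PDB1 from PDB1@CDB1_SRC relocate;
alter pluggable database PDB1 open;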

In summary

To clone a remote PDB you can use a common or local user, with SYSOPER or CREATE PLUGGABLE DATABASE privilege. To relocate a PDB you need a common user with SYSOPER.