Yann Neuhaus

I have a Data Guard environment where I have configured the RMAN DB_UNIQUE_NAME persistent setting for my primary and the standby. With the RMAN DB_UNIQUE_NAME settings I am able to run reports on my Oracle Data Guard environment from any database. I could e.g. list all archivelogs for SITE1 from SITE2 or the other way around.
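For example, connected to the catalog and to SITE2 as target, commands of this kind can be used (a sketch; SITE1/SITE2 are the DB_UNIQUE_NAMEs of my demo):
RMAN> list db_unique_name of database;
RMAN> list archivelog all for db_unique_name 'SITE1';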
Or I could show all persistent settings for SITE1 from SITE2 and of course the other way around. The only prerequisite for this feature is the RMAN catalog. In case you are not connected to the RMAN catalog you end up with the following error:

RMAN says that the target database name is ambiguous. But what does this mean? Let's take a look at the RMAN error with the oerr utility. The oerr utility can not only be used with "ORA" error codes like "oerr ora 01555", but also with "RMAN" error codes.

$ oerr rman 20005
20005, 1, "target database name is ambiguous"
// *Cause: two or more databases in the recovery catalog match this name
// *Action:
//

Ok. This error is much more precise. It looks like RMAN found more than one database called DBIT121 in the catalog, and so RMAN does not know on which DBID to perform the requested command. So let's connect to the RMAN catalog and check if this is really the case.
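A quick check against the catalog, as a sketch:
SQL> select db_key, dbid, name from rc_database where name='DBIT121';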

Indeed. I do have two different DBIDs pointing to the same DB_NAME. Kinda confusing for RMAN. But which one is the one that has been backed up? We can query the RC_BACKUP_SET and RC_BACKUP_PIECE views to find that out.
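For example, a hedged sketch counting the backup sets per DBID:
SQL> select db_id, count(*) from rc_backup_set group by db_id;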

The RMAN DB_UNIQUE_NAME persistent setting is a quite cool feature. This is something I would really recommend when working with RMAN and Data Guard. It allows you to run actions on the primary from the standby or on the standby from the primary. It doesn't matter. But take care that you don't have multiple DBIDs pointing to the same DB in your RMAN catalog.

Last week at a customer the following question came up: We have a varchar2 column in Oracle that contains a date but actually is stored as a string. When we want to convert this column to be a real date data type in Oracle we can use dbms_redefinition. How can we do that in PostgreSQL? The first answer that came to my mind is: Add a new column (of data type date) and populate it with the converted string from the source column, drop the source column and rename the new column to the name of the dropped column. This for sure will work but it is not the most convenient way: What happens if the application is somehow dependent on the order of the columns? Shouldn't be the case if the application is written well, but you never know.
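In PostgreSQL this can be done in place with ALTER TABLE and the "using" clause. A minimal sketch, assuming a table t1 with a varchar column a that holds dates in DDMMYYYY format (names are illustrative):
postgres=# alter table t1 alter column a type date using to_date(a,'DDMMYYYY');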

Perfect. The remaining question is: does this operation block others from reading the table while it is being executed? Let's generate some more data to make the select operation a bit longer and then "watch" the statement in a separate session while we execute the conversion:

In session 1 you’ll notice that the statement is blocked and finally you get this:

Time: 0.453 ms
ERROR: date/time field value out of range: "01012017"
LINE 1: select count(*) from t1 where a = '01012017';
^
HINT: Perhaps you need a different "datestyle" setting.

Conclusion: converting a column from one data type to another is nothing you want to do when your system is live. When you can afford some downtime, the "using" syntax is what you want to use, and it works quite fast. Hope this helps …

Materialized views are a very old feature (you may remember that they were called snapshots a long time ago). A materialized view has all the advantages of a view, as you can define any select statement that joins, filters, aggregates, and see it as one table. It has all the advantages of a table, as it is stored in one segment, can be indexed, partitioned, have constraints, be compressed, etc. It also looks like an index, as it stores data redundantly, in a different physical way, more focused on the way it will be queried than on the way data is entered. Like indexes, materialized views can be used transparently (with query rewrite), but unlike indexes, they are not maintained synchronously and have to be refreshed. They also have some advantages of replication, because the changes done on the source tables can be captured into materialized view logs, so that the refresh can be incremental (fast refresh).
Oracle Database 12.2 goes a step further, being able to deliver a fresh result even when the materialized view is stale. This is an amazing feature called real-time materialized view, which does on-query computation of the fresh result from the stale one, joined with the materialized view log.
I create my DEMO table on the Oracle Exadata Express Cloud Service:
SQL> create table DEMO (id primary key,a,b) as select rownum,round(log(10,rownum)) a, rownum b from xmltable('1 to 100000');
Table created.

I plan to create a materialized view that aggregates the count and sum of B grouped by A. And DBMS_MVIEW can tell me what I need in order to be able to fast refresh it.

Explain Materialized View

The goal is to have a real-time materialized view with frequent refreshes, which means that we need fast refresh to be possible after any kind of modification.
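The output below comes from a query of this kind against MV_CAPABILITIES_TABLE; a sketch of the call (MV_CAPABILITIES_TABLE is created by ?/rdbms/admin/utlxmv.sql):
SQL> @?/rdbms/admin/utlxmv.sql
SQL> exec dbms_mview.explain_mview('select a,sum(b) sum_b from DEMO group by a')
SQL> select capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table;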

CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST_AFTER_ONETAB_DML COUNT(*) is not present in the select list
REFRESH_FAST
REFRESH_FAST_AFTER_INSERT the detail table does not have a materialized view log PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

Here is what I have to do in order to have a materialized view that can be fast refreshed: add COUNT(*) to the select list, and create a materialized view log.

CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST
REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_AFTER_INSERT mv log must have ROWID PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_INSERT mv log must have new values PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_INSERT mv log does not have all necessary columns PDB_ADMIN.DEMO
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view

Ok, now I’m ready to create the materialized view. The only remaining message is for partitioned tables.
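As a recap, the final setup might look like this sketch (names as in the demo):
SQL> create materialized view log on DEMO with rowid, sequence (a,b) including new values;
SQL> create materialized view DEMO_MV refresh fast on demand enable query rewrite as select a, count(*) cnt, sum(b) sum_b, count(b) cnt_b from DEMO group by a;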

Of course, here you can't see that the result is stale, because I inserted a row with value 0, which does not change the sum. Let's count the rows instead, which is something that is also aggregated in my materialized view. I have the option to disable the rewrite and query the source table:

SQL> select /*+ no_rewrite */ count(b) from DEMO;

COUNT(B)
----------
100001

This is the accurate result, but it comes with a full access to the table.

The rewrite can also be forced by hint (because it is a cost-based decision):

SQL> select /*+ rewrite */ count(b) from DEMO;

COUNT(B)
----------
100000

Stale result here: I don’t see the latest modifications.

Frequent refresh

In order to limit the gap between fresh data and stale result, you can refresh the materialized view frequently. It’s not too expensive thanks to the materialized view log: fast refresh is incremental.
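A sketch of a manual fast refresh and of a scheduler job doing it every minute (the job name is illustrative):
SQL> exec dbms_mview.refresh('DEMO_MV','F');
SQL> exec dbms_scheduler.create_job(job_name=>'REFRESH_DEMO_MV',job_type=>'PLSQL_BLOCK',job_action=>'begin dbms_mview.refresh(''DEMO_MV'',''F''); end;',repeat_interval=>'FREQ=MINUTELY',enabled=>true);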

So, we now read the materialized view, but this will last only while there are no updates on the table. So the idea is to trigger a refresh as soon as there are modifications. Ideally it should be like indexes, which are maintained automatically. But indexes are much simpler: they are a simple value-to-rowid mapping to maintain, and rowids do not change. Materialized views have joins and aggregates, and contain all columns.

Refresh on commit

So the idea is to defer the maintenance of the materialized view to commit time. This is the latest point where we are required to do it as we want other sessions to never see stale results. And materialized view logs are there to store the incremental changes even if the transaction is very long. Of course, we need to be aware of it because in general the commit is an immediate and simple operation.

Let’s define the materialized view to refresh on commit instead of on-demand
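A sketch, reusing the DEMO_MV from above:
SQL> alter materialized view DEMO_MV refresh fast on commit;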

With on commit refresh, the materialized view is never stale. The problem is that it can slow down the transactions: in addition to filling the materialized view logs, the commit has the overhead of applying them. In 12.1 this is the only way to have a query on the materialized view that is always fresh. But there's something new in 12.2.

Real-time materialized views

Even when the materialized view is stale, we can get a fresh result without querying the source tables. We have the stale values in the materialized view and we have all changes logged into the materialized view log. Easy or not, merging the two can be computed on the fly to get a fresh result. We still need fast refresh, but we don't need refresh on commit anymore:
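In 12.2 this is a sketch of the corresponding DDL:
SQL> alter materialized view DEMO_MV refresh fast on demand;
SQL> alter materialized view DEMO_MV enable on query computation;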

Still no magic here. For the session that did the modifications, it seems that query rewrite cannot happen. All changes are in the materialized view log, but applying the uncommitted ones for my session seems to be impossible here. Well, let's commit my changes.

We got it. All the magic. The materialized view is read. The materialized view log is read. But we don't need the source tables. All this is merged by outer join and union all. The plan is harder to read, but it requires only 25 logical reads to get fresh results instead of 270 from the source table. The bigger the tables are, and the more complex the query is, the more benefit you get, as long as you don't have too many changes since the last refresh. And this without any overhead on the commits of other transactions. That's the beauty of 12cR2 Enterprise Edition. Can you imagine having to code this yourself? For any query? For any modifications on source tables?

FRESH_MV

This was query rewrite: query the source table and have the CBO transform the query to query the materialized view (given that the CBO costing estimates that it is cheaper). But you can also query the materialized view directly and ask to get a fresh result by joining the materialized view log to the stale result. And this can be used also in Standard Edition (only query rewrite is limited to Enterprise Edition). On-query computation when querying the materialized view is enabled by the FRESH_MV hint:
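For example (column names refer to the materialized view sketched above):
SQL> select /*+ fresh_mv */ a, cnt, sum_b from DEMO_MV where a=3;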

Have you seen that we need even fewer logical reads (12) than before (25)? There is an optimization here with RESULT CACHE. You get this when you have the sequence in the materialized view log, and you can see that the sequence is used in the predicates:

Of course, you also see a predicate with the staleness timestamp (here 2017-02-16 20:31:08) of the materialized view.

This result cache is interesting because the materialized view log is read several times in the execution plan and this is a way to actually read it only once.
SQL> select type,column_count,row_count,cache_id,name from v$result_cache_objects;

The result cache has a dependency on the materialized view log, to be aware of additional changes, and when tracing the transformed query we can see a session lifetime for this result cache: /*+ RESULT_CACHE(LIFETIME=SESSION, NAME="DMLTYPES:MLOG$_DEMO") */. Note that I included the sequence in the materialized view log, but this is not required. I'll show in a future post that the execution plan is different then, and does not use the result cache.

So what?

This is an amazing feature. You can optimize your queries transparently by creating materialized views, get fresh results, and minimize the refresh overhead. Depending on the size of the tables and the rate of modifications, you can choose the right refresh frequency with the goal of limiting the materialized view logs to apply on each query. You have real-time results and bulk refresh at the same time. Oracle Database has always been a database for mixed workloads, where readers don't block writers. And once again we have a feature to optimize queries by pre-calculating them, with minimal impact on the source.

It is transparent, but after this first test, I have a few questions that arise and that I'll try to answer in future posts: Is it always better to have the sequence in the materialized view log? Is the default result cache size still sufficient? How can it use a timestamp with only one-second precision and not an SCN? What happens at the winter Daylight Saving Time clock change? Can we get query rewrite when our own transaction has made the modifications? Do we need to invalidate cursors that read the source table? How accurate are the cardinality estimations on the very volatile materialized view? When the full materialized view log is read, can it trigger a complete refresh?

If you think that the Flashback Drop feature just brings back your table, then this is only half of the story. It does much more than that. Besides undropping the table, it also brings back your constraints, your indexes, your trigger, your grants and the statistics as well.

The ugly part is that Flashback Drop brings back some strange object names, e.g. your indexes and constraints come back with names like "BIN$…" or alike. Maybe something you don't want. So why not combine the Flashback Drop with a Flashback Query on the dictionary to get the old constraint and index names.

Let’s setup a few objects in the SCOTT schema. But before we do that, we need to grant the user SCOTT some extra privileges.

Now we can set up our objects for this test. I will create 2 tables, a few grants, a trigger and statistics. The goal is to end up, after the flashback to before drop, with exactly the same object names as before for the table, the index, the constraints and the trigger.

Everything looks good. Up-to-date statistics, the trigger is enabled and there are no objects with "BIN$xx" names or alike. The next step is quite an important one for this demo: I am just saving the SCN number before the "drop table" into a variable. In the real world, you need to find the SCN number yourself, e.g. with the TIMESTAMP_TO_SCN function.
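A minimal sketch of that step (assuming SCOTT was granted select on v_$database):
SQL> column scn new_value scn
SQL> select current_scn scn from v$database;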

The trick is now to invoke a Flashback Query on the dictionary. Flashback query on the dictionary is not 100% supported, but it works. I just save the current index name into the variable "I" and the old name into the variable "OI".
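A hedged sketch of the idea, with an illustrative table name and the &scn value saved above:
SQL> column i new_value i
SQL> select index_name i from user_indexes where table_name='EMP';
SQL> column oi new_value oi
SQL> select index_name oi from user_indexes as of scn &scn where table_name='EMP';
SQL> alter index "&i" rename to "&oi";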

The Flashback Drop feature does not just bring back your table. It does much more: it brings back your grants, the trigger, the statistics, the indexes and the constraints as well. If you are lucky, you can even combine it with a Flashback Query to retrieve the old names for your indexes, constraints and triggers.

This is a follow-up to the blog where I explained how to disable IPv6 on Oracle Linux 7.

If you have done all the steps which I have explained here http://blog.dbi-services.com/oel-7-how-to-disable-ipv6-on-oracle-linux-7/ then you have already disabled IPv6 successfully. However, some tools require some special attention afterwards if you want to avoid ugly warning or error messages. There are many tools that can use IPv4 and IPv6, and it is impossible to mention all of them, so I will just dig a little deeper into the following four:

Postfix

Oracle

NFS

rsyslogd

Postfix

Let’s start with Postfix. This might be one of the first warning messages you see, in case you have disabled IPv6 on your system. If you receive the following warning message when you try to send an email, then you need to adjust your /etc/postfix/main.cf file.

$ mailx -s "Test" xxx.xxx@xxx.com
Test
.
EOT
$ send-mail: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
send-mail: warning: inet_protocols: configuring for IPv4 support only
postdrop: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
postdrop: warning: inet_protocols: configuring for IPv4 support only

The solution is to configure your /etc/postfix/main.cf file to allow only the IPv4 protocol.
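A minimal sketch of the change, done as root:
# postconf -e 'inet_protocols = ipv4'
# systemctl restart postfix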

The next candidate is the Oracle Listener. In some situations, you might see the following error message in your listener.log file when working with Cloud Control 12c.

TNS-01189: The listener could not authenticate the user

This is related to an Oracle bug, to be more precise, it is “BUG 16054202 – TNLIN EXTRACTS WRONG SUBNETMASK FOR IPV6 ADDRESSES”. The bug can be fixed by configuring the Oracle Listener to work with IPv4 only. This is done via the listener.ora IP parameter, which knows the following options.

IP=FIRST

Listen on the first IP address returned by the DNS resolution of the host name.
If the user wants the listener to listen on the first IP to which the specified host name resolves,
then the address must be qualified with (IP=first).

IP=V4_ONLY

Listen only on IPv4 addresses.

IP=V6_ONLY

Listen only on IPv6 addresses.

Simply put the (IP=V4_ONLY) after your PORT setting, and then restart the listener as shown in the following example.
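A sketch of a listener.ora entry with the IP parameter added (host name and port are illustrative):
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521)(IP = V4_ONLY))
    )
  )
$ lsnrctl stop
$ lsnrctl start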

Now you could either remove the proto option or change it to proto=tcp.

For NFS version 4 you have the following options:

proto=netid The netid determines the transport that is used to communicate with the NFS server. Supported options are tcp, tcp6, and rdma. tcp6 uses IPv6 addresses and is only available if support for TI-RPC is built in. The two others use IPv4 addresses.

In my case, I have added the proto=tcp option to my NFS mount entry in /etc/fstab:
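An illustrative /etc/fstab entry (server name and mount point are assumptions):
nfsserver:/export/orabackup  /u99/backup  nfs  rw,vers=4,proto=tcp  0 0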

In my previous blog post, I talked about SQL Server on Linux and high availability. During my tests, I used an NFS server to share disk resources between my cluster nodes, as described in the Microsoft documentation. A couple of days ago, I decided to add a fourth node (LINUX04) to my cluster infrastructure and I expected to do this work easily. But no chance: I faced a problem I had never had before on this infrastructure.

Switching over to this last node led to a failed SQL Server FCI resource. After digging into the problem, I found the root cause in the SQL Server error log, as shown below:

According to the output above, we may claim this is a mismatch issue between the uids/gids of the mssql user across the cluster nodes. At this stage, I remembered performing some tests, including creating some Linux users, before adding my fourth node, which led to a mismatch for the mssql user's uids/gids. Just keep in mind that the SQL Server installation creates a mssql user by default with the next available uid/gid; in my case the uid and gid therefore ended up different on the new node.

Ok, this explains why I faced this permission issue. After investing some time to figure out how to get rid of this issue without changing the mssql user's uid/gid, I read some discussions about using NFSv4, which is intended to fix this uids/gids mapping issue. It seemed to be perfect in my case! But first, let's just confirm I'm using the correct NFS version.

The problem starts when I try to access the database files despite the correct mapping configuration. I spent some time understanding that some misconceptions persist about how NFSv4 magically fixes mismatched uids/gids. I admit the main documentation is not clear about it, but please feel free to comment if that is not the case. After digging into further pointers, I was able to understand that NFS itself doesn't perform authentication but delegates it down to the RPC mechanism. If we take a look down at RPC security, we may notice it hasn't been updated to support such a mapping. Basically, it continues to use the historic authentication called AUTH_SYS, meaning it sends uids/gids over the network. The translation work comes later, through the idmap service. The only way to get rid of this issue would be to prefer another protocol like RPCSEC_GSS, which includes authentication based on LDAP or Kerberos for example.

The bottom line here is that SQL Server on Linux is not an exception, of course. If we want to continue using basic Unix authentication, keeping uids and gids synchronized across my cluster nodes seems to be a good way to go. Using Kerberos authentication in this case? This is another story that I will try to tell in another blog post!

Today, I will install the Container feature, install Docker and deploy a container.
First of all, I need to enable the Container and the Hyper-V features; take care if you use VirtualBox, because after having enabled Hyper-V this won't work anymore.
Windows Server 2016 supports only Hyper-V containers and no more Windows containers. So I will check if those features are already enabled on my server and, if it is not the case, enable them. Don't forget to map the Windows Server 2016 ISO file to your virtual machine.
To do it just run this PowerShell cmdlet:
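A hedged sketch of that cmdlet (run from an elevated PowerShell session; a reboot follows):
PS C:\> Install-WindowsFeature -Name Containers, Hyper-V -IncludeManagementTools -Restart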
Now, both features are installed in my VM and I can install Docker.
To do so, I will download the Docker engine and client from the Docker project library here to the folder c:\Temp and unzip the file into c:\Program Files:
I now have my Docker folder with the Docker executable files and binaries: dockerd.exe for the Docker engine and docker.exe for the client:
I also add the Docker path to the PATH environment variable:
Optionally, we could add it permanently:
I will install Docker as a Service and start it:
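A sketch of these two steps (dockerd.exe registers itself as a Windows service):
PS C:\> dockerd --register-service
PS C:\> Start-Service docker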
Docker is installed and started, and I'm now able to use it. I don't have any images for the moment, but the microsoft/nanoserver image is available on the Docker Hub:
Let’s download this Nano Server base OS image from the Hub:
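For example:
PS C:\> docker pull microsoft/nanoserver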
Now that I have my Docker OS image, I will start an interactive session with this image:
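Something like:
PS C:\> docker run -it microsoft/nanoserver cmd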
The container starts and I get access to the command prompt, where I can check the processes running in my container, like PowerShell, cmd…:
I will create a PowerShell script in my container to write a Welcome Container message and exit from my container:
I can now see my new container with the following command:
I will now create a new image from my container's changes, named mywelcomecontainer (it's not possible to use upper case for the new image name, otherwise you will receive the error "repository name must be lowercase"):
I can finally run my container: a Hyper-V container will be created from my new image named mywelcomecontainer and my PowerShell script will be executed in the container:
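The command sequence for these last steps might look like this sketch (the container id and the script path are placeholders):
PS C:\> docker ps -a
PS C:\> docker commit <containerid> mywelcomecontainer
PS C:\> docker run --isolation=hyperv mywelcomecontainer powershell c:\welcome.ps1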
As I omitted the docker run option --rm, I still have my container available if I run a docker ps -a command. In order to delete my container I can run docker rm <containerid>; docker rmi deletes images if needed.

To conclude, it looks very easy to create images and Hyper-V containers with Docker in Windows Server 2016. The power of Docker is now available in the Windows World and for sure will be used more and more commonly.

When I don't need a feature, I don't turn it on or don't use it, because that reduces the possibility of running into issues. Most of the time this is true. However, during the preparation of an RMAN workshop, the RMAN list failure command showed me the following dictionary issue.

I first thought that it might be related to some incorrect errors shown by the health check (DBMS_HM), because there used to be some issues with that tool. But even after applying the following patch, nothing changed and the error still appeared.

Do you like DBCA to create a database from the command line, with -silent -createDatabase? With a simple command line you can provision a database, with oratab, tnsnames.ora, directory creation and any setting you want. And you can even call a custom script to customize it further. But if you want to put it in Data Guard, you have to do the duplicate manually with RMAN. This evolves in 12.2 with a new option in DBCA to do that: dbca -silent -createDuplicateDB -createAsStandby

Limitations

I've tried in the Oracle Public Cloud where I had just created a RAC database. But unfortunately, this new feature is only for Single Instance:

[FATAL] [DBT-16056] Specified primary database is not a Single Instance (SI) database.
CAUSE: Duplicate database operation is supported only for SI databases.

Ok. RAC is complex enough anyway, so you don’t need that quick command line to create the standby. So I tried with a single instance database:

Ok. It's a bit surprising to have a new feature in 12.2 that works only on the architecture that was deprecated in 12.1, but if we think about it, DBCA is for fast provisioning. In multitenant you create a CDB once, put it in Data Guard, and the fast provisioning comes with the 'create pluggable database'. And deprecated doesn't mean that we do not use it, and it is good to have a simple command line tool for easy provisioning in non-CDB.

Then, I tried on a non-CDB that I’ve created in 12.2

I'm a big fan of EZCONNECT but I had a few problems with it. What's worth knowing is that there is no 'impossible to connect' message. When it cannot connect, the following message is raised:

[FATAL] [DBT-16051] Archive log mode is not enabled in the primary database.
ACTION: Primary database should be configured with archive log mode for creating a duplicate or standby database.
just because this is the first thing that DBCA checks, and this is where it fails when the connection is not ok.
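For reference, the command looks something like this hedged sketch (names and connect string are illustrative; check dbca -help for the exact options):
dbca -silent -createDuplicateDB -gdbName orcla -sid ORCLB -createAsStandby -dbUniqueName ORCLB -primaryDBConnectionString //primary-host:1521/ORCLA -sysPassword ***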

This will connect RMAN to the target (here called ‘primary’), with the connect string ORCLA and run a duplicate to create ORCLB as specified.

It starts by creating a temporary listener (which is still there in listener.ora even after completion), then creates the auxiliary instance and runs RMAN:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCLB/orcla.log" for further details.

Through RMAN API, the file names are set:

run {
set newname for datafile 1 to '/u01/app/oracle/oradata/orclb/system01.dbf' ;
set newname for datafile 3 to '/u01/app/oracle/oradata/orclb/sysaux01.dbf' ;
set newname for datafile 4 to '/u01/app/oracle/oradata/orclb/undotbs01.dbf' ;
set newname for datafile 7 to '/u01/app/oracle/oradata/orclb/users01.dbf' ;
set newname for tempfile 1 to '/u01/app/oracle/oradata/orclb/temp01.dbf' ;

and the DUPLICATE FOR STANDBY FROM ACTIVE is run:
duplicate target database
for standby
from active database
dorecover
spfile
set 'db_recovery_file_dest_size'='8405385216'
set 'compatible'='12.2.0'
set 'undo_tablespace'='UNDOTBS1'
set 'dispatchers'='(PROTOCOL=TCP) (SERVICE=ORCLAXDB)'
set 'db_name'='orcla'
set 'db_unique_name'='ORCLB'
set 'sga_target'='2281701376'
set 'diagnostic_dest'='/u01/app/oracle'
set 'audit_file_dest'='/u01/app/oracle/audit'
set 'open_cursors'='300'
set 'processes'='300'
set 'nls_language'='AMERICAN'
set 'pga_aggregate_target'='757071872'
set 'db_recovery_file_dest'='/u01/app/oracle/fast_recovery_area/orcla'
set 'db_block_size'='8192'
set 'log_archive_format'='%t_%s_%r.dbf'
set 'nls_territory'='AMERICA'
set 'control_files'="/u01/app/oracle/oradata/orclb/control01.ctl", "/u01/app/oracle/fast_recovery_area/orcla/ORCLB/control02.ctl"
set 'audit_trail'='DB'
set 'db_domain'='compute-usslash.oraclecloud.internal'
set 'remote_login_passwordfile'='EXCLUSIVE'
reset 'local_listener'
reset 'db_file_name_convert'
set 'log_archive_dest_1'='location=/u01/app/oracle/fast_recovery_area/orcla'
reset 'event'
reset 'remote_listener'
nofilenamecheck;
}

The parameters are coming from the 'primary' and are adapted for the new database. Be careful: this is where I prefer to review the parameters first. For example, when you duplicate to clone the primary (without -createAsStandby) you probably don't want to keep the same log_archive_dest that was set in a Data Guard configuration. I'll have to post a blog about that.

At the end, the standby database is opened read-only, so be careful to close it before starting the apply of redo if you don’t have the Active Data Guard option.

Data Guard

DBCA doesn't go beyond the DUPLICATE. And you can use it also in Standard Edition to set up a manual standby.

I hope that one day we will have an option to create the Data Guard configuration in the same process, but here you have to do it yourself:

No tnsnames.ora entry is added for the standby

The static listener entries are not added in listener.ora

No Data Guard configuration is there

The Data Guard Broker is not started except if it was set in advance to true on primary

No standby redo logs are created (except when they were present on primary)

You can set dg_broker_start=true and create the standby redo logs on a post-script that you call with the -customScripts argument. However, the best way is to do it in advance on the primary, and then the duplicate will do the same on the standby.
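None of this is hard to script; a hedged sketch of those manual steps (host names and connect identifiers are illustrative):
$ cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<EOF
ORCLB=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=standby-host)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCLB)))
EOF
SQL> alter database add standby logfile thread 1 size 200M;
SQL> alter system set dg_broker_start=true;
DGMGRL> create configuration orcla as primary database is orcla connect identifier is ORCLA;
DGMGRL> add database orclb as connect identifier is ORCLB maintained as physical;
DGMGRL> enable configuration;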

So what?

You don't need this new feature because it is easy to automate it yourself: it's just a copy of the spfile parameters, with a few changes, and an RMAN duplicate command. But your scripts will be specialized for your environment, and generic scripts are more complex to maintain. The big advantage of having this integrated in DBCA is that it is designed for all configurations and is maintained through versions.

In the latest post I ran a cached SLOB workload on Oracle Cloud IaaS to measure logical reads per second on a system covered by 2 processor licences (so 4 OCPUs). Just as a comparison, here is the same on the Oracle PaaS Database as a Service.

Faster CPU but fewer logical reads processed per second… Don't look only at the specs when choosing an instance type. Test it with your workload…

Besides performance, I really like the Oracle Cloud PaaS for Database. You have easy provisioning (a few clicks) but still full access (root, grid, oracle, sysdba). There is no competitor on that: in other clouds, either you go IaaS and you have to install and configure everything yourself, or you go PaaS and you have very limited admin access. Here you have both.

Let's say the tnsnames.ora is a quite important file on your system, and you want to make sure that you notice when someone changes it. Taking a look at the modification time of that file would be a good idea, or not?

Per default, the ls -l command shows only the modification time (mtime). In my case, I know that the tnsnames.ora was changed on "Feb 9 11:24".

But in reality, more time stamps are stored: the atime, the ctime and the mtime.

atime is the access time (only maintained if the filesystem is not mounted with the noatime option)

ctime is the change time, meaning the inode was changed, e.g. with the chmod command

mtime is the modification time, meaning the content changed

The ctime is often misinterpreted as “creation time”, but this is not the case. The creation time of a file is not recorded with XFS. There are other file systems that can do it, like ZFS, but XFS does not support “creation time”. You can use the stat command to see all time stamps in one shot.

Cool, now I know that the file was changed at "Feb 9 11:31". But how reliable is that information? With the touch command, I can easily change the modification time to any value I like, e.g. I can set it to the same date as before.
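For example (the timestamp format is [[CC]YY]MMDDhhmm):
$ touch -m -t 201702091124 tnsnames.ora
$ ls -l tnsnames.ora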

Only the ctime (change time) is not so easy to change, at least not with the touch command. For changing the ctime you need to invoke the file system debugger or something like that. In the end, monitoring my tnsnames.ora file changes by time stamps is not so reliable. So why not use the XFS extended attribute feature to help me? E.g. I could create md5 check sums, and when the check sum differs, I know that the content was changed. Let's do it with the root user.
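A minimal sketch of that idea, done as root with the trusted name space (the attribute name is my own choice):
# setfattr -n trusted.md5sum -v "$(md5sum tnsnames.ora | awk '{print $1}')" tnsnames.ora
# getfattr -n trusted.md5sum tnsnames.ora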

To overcome this issue, XFS uses 2 disjoint attribute name spaces associated with every filesystem object. They are the root (or trusted) and user address spaces. The root address space is accessible only to the superuser, and then only by specifying a flag argument to the function call. Other users (like the oracle user in my case) will not see or be able to modify attributes in the root address space. The user address space is protected by the normal file permissions mechanism, so the owner of the file can decide who is able to see and/or modify the value of attributes on any particular file.

Now you have a good chance of finding out whether the file content was changed, by simply checking if the file has a different check sum.

Conclusion

XFS extended attributes are quite a powerful feature and you can use them in a lot of scenarios. Take care that you have a backup solution that supports extended attributes, else you will lose all that information once you restore your data.

This blog post comes from a very interesting discussion with one of my friends about the read-only capabilities of secondary replicas in the context of distributed availability groups. Initially, distributed availability groups are designed to address D/R scenarios and some migration scenario types as well. I already discussed one possible migration scenario here. However, we may also take advantage of using secondary replicas as read-only replicas in reporting scenarios (obviously after assessing whether the cost is worth it). In addition, if you plan to introduce scale-out with secondary replicas (even with asynchronous replication), you may consider using distributed availability groups and the cascading feature, which addresses network bandwidth overhead, especially if your cross-datacenter link is not designed to handle a heavy replication workload. Considering this last scenario, my friend's (Sarah Bessard) motivation was to assess distributed availability groups as a replacement for SQL Server replication.

As a reminder, SQL Server 2016 provides a new round-robin feature with secondary read-only replicas, and extending it by including additional replicas from another availability group seems to be a good idea. But here things become more complicated: transparent redirection and round-robin sound promising, but let's see if they work when a distributed availability group comes into play.

Let's have a demo on my lab environment. For the moment there are two separate availability groups, each running on top of its own Windows Failover Cluster – respectively AdvGrp and AdvGrpDR.

At this stage, we will focus only on my second availability group AdvDrGrp. Firstly, I configured read-only routes for my 4 replicas and here is the result:

Read-only routing URLs and preferred replicas are defined for all the replicas. I defined a round-robin configuration for replicas WIN20161SQL16\SQL16 to WIN20163SQL16\SQL16, whereas the last one is configured with a preference order (WIN20163SQL16\SQL16 first and WIN20164SQL16\SQL16 if the previous one is not available).
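For illustration, the round-robin part of such a configuration is expressed with nested parentheses in the routing list, roughly like this sketch (replica and group names as used in this post):
ALTER AVAILABILITY GROUP [AdvDrGrp]
MODIFY REPLICA ON N'WIN20161SQL16\SQL16' WITH
(PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ((N'WIN20161SQL16\SQL16', N'WIN20162SQL16\SQL16', N'WIN20163SQL16\SQL16'), N'WIN20164SQL16\SQL16')));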

After configuring the read-only routes, I decided to check if round-robin comes into play before implementing my distributed availability group. Before running my test I also implemented a special extended event session which includes the read-only route events, as follows:

Performing the previous test after applying the new configuration gives me a different result this time.

It seems that the round-robin capability is not correctly performed, although I used the same read-only route configuration. In the same way, taking a look at the extended event output gave me no results. It seems that transparent redirection and round-robin from the listener did not come into play this time.

Let's perform a last test, which consists of moving the AdvDrGrp availability group to another replica, to confirm that transparent redirection does not work as we might expect.

Same output as previously. The AdvDrGrp availability group has moved from the WIN20163SQL16\SQL16 replica to the WIN20164SQL16\SQL16 replica, and the connection reached the newly defined primary of the second availability group (secondary role from the distributed availability group perspective), meaning we are not redirected to one of the defined secondaries.

At this stage, it seems that we will have to implement our own load balancing component – whatever it is – in order to benefit from all the secondary replicas and read-only features on the second availability group. Maybe one feature that Microsoft may consider as an improvement for the future.

I've blogged recently about the Oracle Core Factor in the Clouds. And in order to optimize your Oracle licences, you need to choose the instance type that can run faster on fewer cores. In a previous blog post, I tried to show how complex this can be, comparing the same workload (cached SLOB) on different instances of the same cloud provider (Amazon). I did that on instances with 2 virtual cores, covered by 2 Oracle Database processor licences. Here I'm doing the same on the Oracle Public Cloud where, with the same number of licences, you can run on 4 hyper-threaded cores.

Trial IaaS

I'm running with the 30-months trial subscription. I did several tests because they were not consistent at first. I had some runs where it seemed that I was not running at full CPU. What I know is that your CPU resources are guaranteed on the Oracle Public Cloud, but maybe it's not the case on the trial, or I was working during a maintenance window, or…

Well, I finally got consistent results and I ran the following test on the IaaS (Cloud Compute Service) to do something similar to what I did on AWS, with the Bring Your Own License idea.

In the Oracle Public Cloud, you can run 2 cores per Oracle processor licence. This means that with 2 processor licences, I can run on an instance shape with 4 OCPUs. This shape is called 'OC5'. Here it is:

This is really good. This is x2.8 more LIOPS than the maximum I had on AWS EC2. A x2 factor is expected because I have x2 vCPUs here. But the CPU is also faster. So, two conclusions here:

There is no technical reason behind the rejection of the core factor on Amazon EC2. It is only a marketing decision.

For sure, for the same Oracle Database cost, the Oracle Cloud outperforms Amazon EC2 because it is cheaper (not to mention the discounts you will get if you go to the Oracle Cloud).

So what?

This is not a benchmark. The LIOPS may depend a lot on your application behaviour, and CPU is not the only resource to take care of. But for sure, the Oracle Public Cloud IaaS is fast and costs less when used for Oracle products, because of the rules on the core factor. But those rules are for information only. Check your contract for the legal stuff.

Relying too much on the RMAN Data Recovery Advisor is not always the best idea. In a lot of situations it tells you the right things; however, sometimes it does not tell you the optimal thing, and sometimes RMAN list failure does not show any failure at all, even if there is one.

So … let's quickly simulate the loss of a datafile during the normal runtime of the database. The result is a clear error message which says that datafile 5 is missing.

Of course, I could shut down the DB and then start it up again, which would trigger a health check, but shutting down an instance is not always easy on production systems, especially when only one datafile is missing, all others are available and only a part of the application is affected.

The solution to that issue is to run a manual health check. Quite a lot of health checks can be run manually, as shown in the documentation.
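As a sketch, with DBMS_HM (the run name is arbitrary):
SQL> exec dbms_hm.run_check('DB Structure Integrity Check','my_run');
SQL> select dbms_hm.get_run_report('my_run') from dual;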

The Oracle PDBaaS is for database developers. And database developers may need to trace what happens with their queries: SQL trace and optimizer trace. Let's see what we can do on the Exadata Express Cloud Service.

V$DIAG_TRACE_FILE_CONTENTS

On the managed PDBaaS you don't have access to the filesystem. But in 12.2 you have views that can display the content of the trace files: V$DIAG_TRACE_FILE lists the files you can access (those that pertain to your container) and V$DIAG_TRACE_FILE_CONTENTS can display their content.

Here is an example how to read it.
I get my tracefile name from v$process:
SQL> column tracefile new_value tracefile
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));

And read the ‘payload’ from the view:
SQL> set linesize 4000 pagesize 0 trimspool on
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile';
old 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile'
new 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc'

no rows selected

Of course, I didn’t enable any trace, so the file is empty.

10046

You can’t enable SQL Trace in the Exadata Express Cloud Service. This is disabled by lockdown profile and by not granting execute to the packages that can do it.
Here is what I tried. Please tell me if you have other ideas:
SQL> alter session set sql_trace=true;
ERROR:
ORA-01031: insufficient privileges

*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SESSION", line 178
ORA-06512: at line 1

SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop; END;

SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop; END;

No doubt, we can't use sql_trace. I'm not sure it is a big problem because we have all options in the Exadata Express Cloud Service, so we have ASH, SQL Monitor, etc. I'm not saying that sql_trace is not useful anymore – there are cases where you need to see the wait events one by one – but from a development point of view a plan with execution statistics should be sufficient.

10053

Tracing the CBO is a different thing. There is no alternative when you want to understand the choices of the optimizer. It is not something I use every day, but there are some cases where it is the only tool to troubleshoot.
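For reference, the usual ways to enable the optimizer trace are the classic event or the 12c unified tracing syntax (whether they are allowed here depends on the lockdown profile):
SQL> alter session set events '10053 trace name context forever, level 1';
SQL> alter session set events 'trace[SQL_Optimizer.*]';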

Those views are new in 12.2 and come in several variants that show only part of the files, so that you can grant access to read those files only for 10046 (SQL Trace) or for 10053 (Optimizer Trace), for all sessions or only for the user's own session.

GV$DIAG_TRACE_FILE (from x$dbgtflist) and GV$DIAG_TRACE_FILE_CONTENTS (from x$dbgtfview) show all files from the ADR traces
GV$DIAG_APP_TRACE_FILE (from x$dbgatflist) is a subset showing all files containing SQL Trace or Optimizer Trace
GV$DIAG_SQL_TRACE_RECORDS (from x$dbgtfsqlt) is a subset showing all files containing SQL Trace
GV$DIAG_OPT_TRACE_RECORDS (from x$dbgtfoptt) is a subset showing all files containing Optimizer Trace
V$DIAG_SESS_SQL_TRACE_RECORDS (from x$dbgtfssqlt) and V$DIAG_SESS_OPT_TRACE_RECORDS (from x$dbgtfsoptt) are similar, but for your session (and then no GV$ as your session is on one instance only).

A new role, APPLICATION_TRACE_VIEWER, grants select on the views that show only SQL and Optimizer traces.

We can access SQL Trace and Optimizer trace and this is very nice. Access to traces has always been a problem because they are on the server, may contain sensitive data, etc. Having views that give access easily and in a controlled way is a very nice 12.2 feature.
I don't know if enabling Optimizer trace on the Exadata Express Cloud Service is expected, or just something that was forgotten by the lockdown profile. I hope the first hypothesis is the right one, and I hope that we will be allowed to enable SQL Trace as well. This service is for developers, and I'm a big advocate of giving all tools to developers so that performance is addressed before production.

Not too often, but sometimes you need to offload your RMAN backups and your Data Pump exports to CIFS. You might run out of space on your NFS server, or you are a Windows shop which has only CIFS shares, and then you might need to put your RMAN backups on CIFS.

One very important MOS Note regarding RMAN backups to CIFS is the following

Is use of CIFS Protocol for RMAN backups supported? (Doc ID 444809.1)

Running an RDBMS on CIFS is clearly not supported because CIFS does not guarantee atomic writes of 512-byte blocks. On the other hand, RMAN does not rely on 512-byte atomic writes, and so it is fine to use RMAN to CIFS.

However, CIFS is not certified by Oracle and the RDBMS development team cannot be involved in case there are any issues. The MOS note was last updated in November 2013, so it might not be a bad idea to double check it with Oracle again.

In earlier OS and DB versions, there used to be issues with filesystemio_options SETALL, DIRECTIO or ASYNCH, and so I wanted to double check if this is still the case with more recent releases.

I have done the tests with Oracle 11.2.0.4 and 12.1.0.2 on OEL 6.8 with Samba 3.6.

I start with filesystemio_options SETALL, and RMAN and Data Pump with 18 parallel sessions, to make sure that I hit the CIFS share quite hard. Before we start the test, we have to mount the CIFS share. All mount options are documented at the following link.

user = M$ User to connect to the CIFS share
rw = mount read write
uid = sets the uid that will own all files or directories on the mounted filesystem
gid = sets the gid that will own all files or directories on the mounted filesystem
rsize = default network read size
wsize = default network write size

Regarding the rsize/wsize, I have used the values which I usually take when using NFS mounts.
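The mount command itself might look like this (server, share and mount point are illustrative):
# mount -t cifs //winserver/orabackup /u99/backup -o user=backupuser,rw,uid=oracle,gid=oinstall,rsize=32768,wsize=32768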

Before running any RMAN backups to it, make sure that you can create files with the oracle user. If it is failing at this step already, then you can stop here. This has to work before continuing any further.

I have repeated the same test with 12.1.0.2, also with no issues. In earlier releases I had issues with filesystemio_options=SETALL, where I had to disable directio, and sometimes direct and asynch io. But this is not the case anymore with more recent OEL and Samba releases.

Conclusion

I would not use RMAN backups to CIFS as a permanent solution, but if you are running out of space on your NFS share, or maybe for migrations, it is a good alternative.

In the Oracle Public Cloud, Transparent Data Encryption is not an option: you can use it because it is included in all database services for all editions, and you have to use it because the database won't allow you to create, or import, non-encrypted tablespaces. This is controlled by a new parameter, encrypt_new_tablespaces, which defaults to CLOUD_ONLY.

encrypt_new_tablespaces

In previous versions, we had to explicitly ENCRYPT the tablespace in the CREATE TABLESPACE statement.
Here we don't need to add this clause in the DDL when we are on the Oracle Public Cloud because the default is:
SQL> show parameter encrypt

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string CLOUD_ONLY
This parameter can take the following values:
SQL> alter system set encrypt_new_tablespaces=tictactoe;
alter system set encrypt_new_tablespaces=tictactoe
*
ERROR at line 1:
ORA-00096: invalid value TICTACTOE for parameter encrypt_new_tablespaces, must be from among DDL, ALWAYS, CLOUD_ONLY

The CLOUD_ONLY value automatically encrypts new tablespaces when we are on the Oracle Public Cloud only.
The ALWAYS value always encrypts new tablespaces, which is good if you have the Advanced Security Option and want to ensure that all data is encrypted.
The DDL value never encrypts new tablespaces, except when explicitly requested in the CREATE TABLESPACE. This is similar to the behavior before this parameter was introduced.

encrypt_new_tablespaces=DDL

If you are not on the Oracle Public Cloud, CLOUD_ONLY, the default, behaves as DDL. If you are on the Oracle Public Cloud, CLOUD_ONLY behaves like ALWAYS.

You may think that you can bypass the obligation to encrypt, just by setting encrypt_new_tablespaces=DDL but it’s not a good idea. Let’s test it.

I’ve set encrypt_new_tablespaces=ddl in an init.ora and I’ve created a database manually (CREATE DATABASE, catalog, catproc) and it works.

In 12.2 you can encrypt SYSTEM, SYSAUX and UNDO but this is not mandatory. However, I have created a user tablespace, USERS without any problem thanks to encrypt_new_tablespaces=DDL:

It is impossible to open a database with unencrypted user tablespaces in the Oracle Public Cloud, whatever encrypt_new_tablespaces is set to. I can only startup mount, and there's nothing more to do at that point.

You don't find those "Verifying all user tablespaces in pdb 0 are encrypted in Oracle Cloud.." messages when you are not in the Oracle Public Cloud.
But in the Oracle Public Cloud, the instance is forced to stop as soon as an unencrypted tablespace is found.

Fake it

What I’ll do now is only for academic purpose, to understand what happens and, maybe, troubleshoot if you have created unencrypted tablespaces. But it is not documented, and not supported.

From a cloud instance, you can get metadata about your instance by getting attributes from http://192.0.0.192 (Oracle Cloud uses Nimbula)
When Oracle Database instance starts, it reads the dns domain from http://192.0.0.192/latest/attributes/dns/domain:
[oracle@DBI122 ~]$ curl http://192.0.0.192/latest/attributes/dns/domain
compute-franck.oraclecloud.internal.[oracle@DBI122 ~]$

I suppose that the detection of the Oracle Cloud is done from that. If you are not in the Oracle Cloud, you will get no answer from http://192.0.0.192, so let's simulate that by blocking this IP address:
[root@DBI122 opc]# iptables -A OUTPUT -d 192.0.0.192 -j REJECT

encrypt_new_tablespaces=DDL is not a way to avoid encryption in the Oracle Public Cloud, except temporarily for a test on a database that you create for that and never re-start once you have created user tablespaces. There are still some bugs and issues with TDE (when importing from non-TDE, when using local undo,…) so it is good to know that there can be a workaround. But remember this is not supported. If you need this, please contact My Oracle Support.

Note that I'm more interested in the CPU time because the elapsed time includes the time to write the trace.
I'm reading a 2-million-line 10053 trace file here. The CBO takes time, but it is doing a lot of work here.

My awk script also records the maximum time spent on each query block and then displays the 'Query Block Registry' with those times. This is a good way to understand which CBO transformation is responsible for most of the parse time:

The interesting point here is that [FINAL] marks the transformation that is chosen by the optimizer, so we know that we have spent time on a query block that was finally chosen for the best plan.

Before going further, here is my ugly awk script that gives the above output from a 10053 trace file:

So what can I do from here? I can avoid the subquery unnesting by adding the NO_UNNEST hint in the subquery, or add it to the main query as /*+ NO_UNNEST( @SEL$10) */

This really reduces the parse time, but I have to check that the plan is still acceptable. Actually, we cannot blame the CBO for the time it takes here, because the goal of subquery unnesting (SU) is exactly that: open the way to a lot of new access paths, which can be costed and may give a better execution plan. And an application should not parse too often, so spending a few seconds in parsing is not bad if it helps to execute the query quickly all over the day.

The separation of roles between system DBA and application DBA is the way to go for agile development and the PDBaaS managed service is an excellent way to play with this concept: You are PDB administrator but not the CDB administrator.
Here is an example about tablespace creation/deletion, and the kind of problems that may arise with this architecture.
The PDB administrator manages all the PDB objects. He must be able to create users and tablespaces in order to deploy an application.

Create tablespace

So yes, in Exadata Express Cloud Service we can create a tablespace and this is what I did:
SQL> create tablespace IOPS datafile size 10G;
Tablespace created.

and you don’t need to specify the datafile location because db_create_file_dest is defined:
SQL> show parameter db_create_file_dest

With lockdown profiles, you always get the same message: no clue about what is not authorized. I know that there is a DROP_TABLESPACE_KEEP_DATAFILES feature that you can disable with lockdown profiles, and this makes sense when the CDB administrator does not want PDB administrators to leave dead datafiles in the system. Here we are on ASM, with OMF, so the datafiles are automatically dropped. But the lockdown works at statement syntax level, so we have to mention the clause:
SQL> drop tablespace iops including contents and datafiles;

drop tablespace iops including contents and datafiles
Error report -
ORA-38881: Cannot drop tablespace IOPS on primary database due to guaranteed restore points.
38881. 00000 - "Cannot drop tablespace %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to drop a tablespace on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo dropping of a tablespace.
*Action: Drop all guaranteed restore points first and retry, or delay dropping the tablespace until all guaranteed restore points are removed.

So the syntax is accepted, but here I have another problem. I have a guaranteed restore point and this prevents the drop of tablespace.

Let's have a look at restore points, because I didn't create one (this is something we are not allowed to do on the Exadata Express Cloud Service, which would be a nice feature as this service is focused on developers).
SQL> select * from v$restore_point;

The PDB has been created when I subscribed to the service, on 22-JAN-17 and we are now 03-Feb-17.

So it seems that the CDB administrator (it is a managed service, the CDB DBA is Oracle) created a restore point last Saturday.
The name, PRE_17_1_7, looks like something we do before a maintenance operation, in case something goes wrong. I had no notification about any maintenance. And anyway, we usually remove such a restore point as soon as possible, because it fills the FRA.

So far so good, but I’m stuck here. The restore point is at CDB level, so I cannot drop it:
SQL> drop restore point PRE_17_1_2;

drop restore point PRE_17_1_2
Error report -
ORA-38780: Restore point 'PRE_17_1_2' does not exist.
38780. 00000 - "Restore point '%s' does not exist."
*Cause: The restore point name of the DROP RESTORE POINT command does not exist.
*Action: No action required.

This means that I cannot drop my tablespace. And I cannot even resize the datafiles to their minimum:
SQL> alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M;

alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M
Error report -
ORA-38883: Cannot shrink data file +DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351 on primary database due to guaranteed restore points.
38883. 00000 - "Cannot shrink data file %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to shrink a data file on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo the shrinking of a data file.
*Action: Drop all guaranteed restore points first and retry, or delay the data file resize until all guaranteed restore points are removed.

I have an empty tablespace that takes all my allocated storage and I cannot remove it.

So what to do? Try to contact support? Or fill the FRA until it raises an alert?
I would try the first one, but the CSI I received with my order is not a valid one…

Recently a client asked me if I could generate a relational diagram for an Oracle schema. He was just preparing a migration and wanted to see how the tables are organized in the schema.
In this article we will show how this is possible with SQL Developer. We are using SQL Developer 4.1.3.20.
Once SQL Developer is started, just proceed as follows:
Launch File ==> Data Modeler ==> Import ==> Data Dictionary
Choose your connection and click Next.
Then choose the schema.
In this example we check all tables and click Next.
Review the summary.
And then click Finish.