Yann Neuhaus

Multitenant was introduced in 12.1.0.1 with the goal of sharing resources while isolating data. However, having all PDBs under the same root can be convenient for manipulating data in multiple PDBs. In the first patchset, 12.1.0.2, a way to query across containers was introduced so that the CDB administrator can see data in other containers. In the second release, 12.2.0.1, this goes further with the introduction of Application Containers and cross-PDB DML. Currently, not all possibilities are documented and not all documented features are actually working. This will probably improve in the next patchset. I’ll start here with something simple: an insert from the root into a table which is in a PDB.
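As a minimal sketch, assuming a common user owns a DEMO(n, text) table in CDB$ROOT and in each PDB (the CON_ID values are illustrative), the cross-container insert looks like this:

-- run as a common user connected to CDB$ROOT
SQL> insert into containers(DEMO) (con_id, n, text) values (3, 1, 'Cross-container insert');
SQL> insert into containers(DEMO) (con_id, n, text) values (4, 1, 'Cross-container insert');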

Looking at the sessions that hold a transaction, the ones on the PDBs look like database link connections:
22:48:15 SQL> select taddr,con_id,program,action,module from v$session where saddr in (select ses_addr from v$transaction);

However, when querying with CONTAINERS() the session does not use the database links but something that looks like a parallel query process switching to the containers. This means it is not the same transaction, so we do not see the uncommitted modifications:
22:48:15 SQL> select * from containers(DEMO);
no rows selected

Now, I commit:
22:48:15 SQL> commit;
Commit complete.

and all transactions have ended:
22:48:15 SQL> select taddr,con_id,program,action,module from v$session where saddr in (select ses_addr from v$transaction);
no rows selected

The links are still open but no longer in a transaction:
23:10:21 SQL> select * from v$dblink;

Or we can check the data from each PDB:
22:48:15 SQL> alter session set container=PDB1;
Session altered.

22:48:15 SQL> select * from DEMO;

N TEXT
- ----
1 Cross-container insert

So what?

This is a convenient way for the CDB administrator, or for the Application Root administrator, to run some DML on different containers without having to create a database link. Of course, a common user can also switch to a PDB using ‘alter session set container’, but that does not allow a transaction that spans multiple containers. You can think of it as a shortcut to avoid creating database links from the root to its containers.

The conference started on April 5th at 9:00 AM, and I was immersed in a wonderful and amazing Opening Ceremony with light effects and epic music. Indeed, using PowerShell is always an epic moment…

Then the lights turned on and all the speakers stood up and came on stage. As you may see, they came from all around the world to bring us the best of PowerShell!

This Opening Ceremony was followed by the Keynote “State of the Union” presented by Jeffrey SNOVER, also known as the “father” of Windows PowerShell. He really congratulated the PowerShell Team for their great work, which now makes it possible to use our favorite shell on Linux environments. Moreover, he described this Transformational Change as a logical evolution of the tool.

Before lunch, Will SCHROEDER presented how, over the last years, Red Team and Blue Team have played cat and mouse around security. As a conclusion of this session, we can say that notable security improvements have been made over the PowerShell versions.

If you had a question, that was the place to go!

In the afternoon, you had the choice between four different streams. I decided to learn more about PowerShell on Linux and to attend “Hell freezing over: PowerShell on Linux and GitHub” presented by Bartosz Bielawski.

If you wanted to get a good overview of what Windows Server Containers are and how convenient they are, the session “Getting started with Windows Server container” presented by Flynn Bundy was great.

Then, the session “Look! Up in the sky! It’s a bird. It’s a plane. It’s Nanoman!” presented by Aleksandar Nikolic and Jan Egil Ring gave you key clues to connect to and administer a Windows Nano Server with PowerShell.

And finally, I discovered an interesting PowerShell module called Pester which automates code verification. It can, for instance, be used to run MSSQL instance policy checks with PowerShell. This session, called “Green is bad Red is Good – Turning your Checklists into Pester Tests”, was presented by Rob Sewell.

The second day, Bartosz Bielawski presented another session about PowerShell on Linux called “Knock, knock, knock – Linux at your door”.

An interesting session presented by Jason Yoder, called “FASTER!!!! Make Your Code Run Faster!”, gave some key clues and approaches to optimize your PowerShell code.

In the afternoon, Rob Sewell rocked again with “Using the SQL Server Provider”.

Then I followed Mathias R Jessen’s session named “Regex 2.0: Full Coverage”.

During the last day, I wanted to be initiated to Visual Studio Code. So I went to David Wilson’s session “Advanced PowerShell Module Development with Visual Studio Code”.

You can find dbatools on the official website. I highly recommend taking a look, it’s worth it!

I will finish my Road Trip with Luc Dekens’ session called “Visualising performance in VMware vSphere”.

Many interesting and pertinent sessions happened at this PowerShell Conference Europe 2017. If you want to learn more about this event, take a look at the official website. The slides and demos are on GitHub.

For the last conference day, after the Keynote about “JavaScript: The next Language YOU should learn” by Dan McGhan, I decided to attend presentations on the following topics:
– Upgrade your APEX app with zero downtime by using EBR
– Understand and make use of CSS(3)
– Best Practices for APEX Administrators
– APEX Version Control and Team Working
– Database Cloud Services with APEX
– Date, Time, Calendar and Co with APEX
I also got the chance to have a 1:1 talk with Marc Sewtz to expose some wishes about APEX and talk about some customer issues.

JavaScript programming language
JavaScript allows asynchronous programming: the main thread can make calls to asynchronous APIs which return their results via an event/callback queue.
When the web browser parses the page to be rendered, the DOM tree is built. That build process is stopped whenever JavaScript is found, until the JavaScript execution is completed. This means the JavaScript can only address the elements that were already put into the DOM tree, not the full page. That explains why people tend to put the JavaScript at the bottom of the page definition.
Fortunately, there is the JQuery DOM manipulation library which allows developers to abstract from that constraint.
What you also need to keep in mind about JavaScript is:
– functions are first-class citizens
– functions provide scope and closure
In APEX you can make use of JavaScript for AJAX calls by using apex.server.process.
JavaScript is definitely a language that APEX developers should master besides PL/SQL to meet customer requirements.

Upgrade your APEX app with zero downtime by using EBR:
EBR (Edition Based Redefinition) is a tool in the Oracle database to support online application upgrades.
You can find details about EBR in following Oracle documentation: Technet Edition Based Redefinition
EBR allows you to have your DB looking like 2 DBs, each identified in the normal way by a service (just like 2 PDBs in the same CDB).
Hot rollover will be enabled by using a traffic director (load balancer).
This introduces an affinity between the end user session and the application version.
In APEX you need to create a copy of your application with the changes for the new version. The application version switch is managed with APEX_UTIL.SET_EDITION setting the value of the current application in the APPLICATION_PREFERENCES table.
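Independently of APEX, a minimal sketch of the underlying EBR mechanics (edition and schema names are examples) looks like this:

-- create a new edition (as child of the current one) and allow the application schema to use it
SQL> create edition release_2;
SQL> grant use on edition release_2 to app_owner;
-- a session then picks the application version it wants to see
SQL> alter session set edition = release_2;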

Understand and make use of CSS(3):
CSS defines the layout design. The basic structure is: selector { property: value }
There are numerous selectors which can be combined. Also pseudo classes and elements can be used as reference.
You need to be careful about the cascading order; some strict rules define it. The “!important” declaration, which overrules the cascade, should be used as little as possible.
The definition of objects to be rendered is based on the Box Model (Margin – Border – Padding – Content).
There are 2 levels for the definitions:
– Block
– Inline (mainly for the text)
Positions also have multiple definition references:
– static
– relative
– fixed
– absolute
Media queries will allow the definitions for responsive design.
You can verify if the elements you want to use are supported by your web browser by checking the following web site: http://www.caniuse.com/
I can also recommend visiting the W3Schools site: https://www.w3schools.com/css/

Best Practices for APEX Administrators:
The following best practice rules were presented, based on the presenter’s experience:
– Create a dedicated folder to store the APEX installation package (e.g. under the Oracle “product” folder)
– Create a dedicated tablespace for APEX
– Put every workspace in its own schema with a dedicated tablespace
– Build your own workspace delivery system
– Restrict access to workspace administration
– Use a version dedicated folder instead of /i/
– Rename ords.war based on application using it
– Setup automated export and backup of the workspaces and applications
– When patching also keep a copy of the full install package
– Manage downtimes during update operations (only required while SQL is running)
– Set ACLs to give only the required access

APEX Version Control and Team Working:
There are different ways to manage team work and version control for APEX development, but none are identical to what is done for other programming languages due to the way APEX works.
What is common to the presented approaches is that the development cycle is about 3 weeks and there is always a development, an integration, a test and a production environment. Code related to the database (DML, packages, …) is stored and managed with a version control system like GitHub, and the APEX application is exported and pushed to the version control system on a daily basis.
Some people use a CI (Continuous Integration) engine to generate a VM with the full test environment from the committed development work.
To manage the deployment of selected features, you need to use conditional builds.
There are different ways to export/import APEX applications:
– ApexExport java class
– “apex export” and “apximp” in SQLcl
– “Application Archive” packaged application
– manual export/import
Oracle provides a white paper describing best practices to manage the APEX development process: http://www.oracle.com/technetwork/developer-tools/apex/learnmore/apex-life-cycle-management-wp-3030229.pdf

Database Cloud Services with APEX:
The requirements were about the setup of a private cloud to host about 200TB of data over 300000 schemas.
In order to properly manage the infrastructure, the following tools have been created in APEX:
– DB service portal (request management, password reset, reporting, cost tracking, approvals,…)
– DB metadata manager (reporting, interfacing, measurement of space, CPU, and so on)
– DB service automation (order management, log management, messaging, maintenance management)
This allowed raising customer satisfaction and enhancing DBA efficiency and metadata maintenance.

Date, Time, Calendar and Co with APEX:
Dates can be represented in different ways in the Oracle DB:
– DATE
– TIMESTAMP
– TIMESTAMP WITH TIME ZONE
– TIMESTAMP WITH LOCAL TIME ZONE
If you dump those data types you will see the details of their implementation and how the Oracle DB stores them.
If you subtract one DATE from another you get a decimal number (of days), and if you add a number to a DATE you get a DATE.
There are functions to manipulate dates, like ADD_MONTHS (for DATE) and INTERVAL arithmetic (for TIMESTAMP).
The EXTRACT function allows you to get specific elements of a timestamp.
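As a quick illustration of the behaviour described above:

SQL> select sysdate + 1 as tomorrow,                               -- DATE + number = DATE
            date '2017-05-12' - date '2017-05-01' as days_between, -- DATE - DATE = number
            add_months(sysdate, 3) as in_three_months,
            extract(year from systimestamp) as current_year
     from dual;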
APEX items are always of VARCHAR2 type, so any date manipulation will implicitly use TO_CHAR and TO_DATE conversions.
The date format is to be set in the APEX application globalization parameters. Most of those parameters map to the DB NLS values by default.
In APEX 5.1 there were some additions to the Calendar like:
– ability to define Dynamic Actions
– JavaScript initialization on parameters
– …
I would recommend playing with the Calendar sample application to see all the new capabilities.

The installation is very simple: in the Feature Selection window, under Shared Features, you have a Machine Learning Server (Standalone) with R and Python.
I check Python to install the Python Server and here we go.
Like for the Python services, you need to accept “Consent to install Python”
Don’t forget to give your server access to the Internet. Otherwise, the installation fails and you get this message:
With Internet, the installation is successful.
At the end of the installation, I open the summary file to analyze what is installed.
I find the feature parameters for a scripted installation of the Python Server:

SQL_SHARED_AA for the Machine Learning Server (Standalone) – AA is for ADVANCED ANALYTICS

SQL_SHARED_MPY for Python

At this point, I have a Python Server but, now, what is a Python Server?
On my server, I open the search and type Python as keyword…
No result found… OK, old method: go to the folder and search manually.
I go directly in the folder “C:\Program Files\Microsoft SQL Server\140” and I see a folder “PYTHON_SERVER”:
I open the folder “PYTHON_SERVER” and see three executables:

Uninstall-Anaconda: to uninstall the python server

Python: Command Console to execute Python code

Pythonw: Command Console to execute Python code without a window

To test the Python server, I run the python executable (to get a cmd window) and use the same example as in my article on the Python service.
The Python query is:

After the Keynote about “Oracle Application Express – Ahead of it’s time. Leading the way forward” by Joel Kallmann (Director of Software Development for APEX) and some overall presentation about development with APEX 5.1, I decided to attend presentations on the following topics, oriented towards tools that make developers’ lives easier:
– APEX OOS Tools and helper
– Useful APEX plugins
– LESS is more
– JavaScript Debugging and Tuning
– Introduction to Oracle JET

Oracle Application Express – Ahead of it’s time. Leading the way forward:
In 2008 APEX was already matching the definition of cloud given by the NIST (National Institute of Standards and Technology), and even matching the definition of PaaS from 2002: APEX was ahead of its time!
APEX allows you to increase productivity and consistency while reducing complexity, thanks to its declarative framework. It is therefore to be seen as a low-code development environment: https://apex.oracle.com/lowcode/
What is impressive about APEX is that a packaged app written in 2008 is still able to run on apex.oracle.com today.
It’s the most used development platform within Oracle for running their internal business.
There is now an online APEX curriculum available for free on Oracle academy: APEX on Oracle academy
Oracle database free services online will be launched, replacing apex.oracle.com.

LESS is more:
What is LESS?
LESS is a CSS pre-processor which allows using variables, mixins and nested rules to facilitate the management of CSS rules.
It’s already in use in APEX, so you can leverage that tool to adjust your application to your corporate identity guidelines.
The Theme Roller of APEX 5.x makes use of it, but the core.css of APEX is not modified that way.
I would suggest you visit the following website if you are interested in how LESS works: http://lesscss.org/

JavaScript Debugging and Tuning:
APEX makes use of JQuery.
In some cases you might run into performance issues while loading or using your application pages.
Here are some tips and tricks to optimize JavaScript in your application based on experience.
– Use the right selector to reduce searches in the DOM (from most to least selective: id, element, class, attribute, pseudo)
– Use loops the right way (arrays can help)
– Use variables to reduce DOM access
– Use detach function to have an “offline” DOM and reduce the cost of parsing and accessing the DOM
– In some cases native JavaScript is faster than JQuery
There are tools to help you measure the performance of your page:
– jsperf.com
– ESBench.com
– Chrome Canary developer tools
When it comes to page load performance, the size of your JavaScript library files is key. It can be reduced using tools like UglifyJS.
For Debugging and Logging purpose you can make use of the client console.log or even better the APEX.debug wrapper on it. Unfortunately those logs are only visible on the client console during runtime. One option to centralize them would be to write the result of the logs into a DB table using AJAX. Also stacktrace.js is of help as it captures the user context which can then be put with the logs to better understand the issue.
Ultimately, a REST service could also be an alternative to send the logs back to the DB.

Introduction to Oracle JET:
Oracle JET: JavaScript Extension Toolkit
It supports multilingual applications and follows W3C standards.
A JET module is always made of at least 2 parts:
– JavaScript file (view Models)
– HTML file (views)
When using JET modules you always have to take care of the required libraries/modules (dependencies).
APEX only makes use of the JET Charts for now.
I can only recommend visiting the Oracle web page on that subject: http://www.oracle.com/webfolder/technetwork/jet/index.html/

Oracle 12.2 is released and we will have to upgrade. In this blog we are going to talk about upgrading to Oracle 12.2 using the dbupgrade utility.

Of course the dbua tool is still available and can also be used.

Oracle Database 12.1 introduced the Parallel Upgrade Utility, catctl.pl. This utility reduces the total amount of time it takes to perform an upgrade by loading the database dictionary in parallel, and by using multiple SQL processes to upgrade the database.

In Oracle 12.1, catctl.pl had to be run through perl, like: $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l /home/oracle catupgrd.log
Starting with Oracle 12.2, we can use the dbupgrade utility, which starts up catctl.pl for us instead of having to run it from perl.

In this article we will describe the steps for upgrade using dbupgrade.

The first step is to run the Pre-Upgrade Information Tool, preupgrade.jar, which replaces the preupgrd.sql and utluppkg.sql scripts. This tool is run from the operating system command line. Remember that in 12.1, the Pre-Upgrade Information Tool was run within SQL*Plus as a SQL file.

Once the execution is done, the tool will have identified possible issues for the upgrade and will also have generated 2 scripts:
 preupgrade_fixups.sql: fixes issues that an automated script can fix safely. This script must be run before the upgrade
 postupgrade_fixups.sql: fixes the upgrade issues that can be automatically fixed after the upgrade.
For all other issues we have to manually fix them.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> @preupgrade_fixups.sql

Now, the next version of SQL Server, officially named SQL Server 2017, also integrates the Python language.
The first step is to install the Python Services during the installation of SQL Server.
In the Instance Features > Database Engine Services, you have a new structure named “Machine Learning Services (In-Database)” with R and Python.
I check the “Python” box and click the Next button.
After the classic instance, server and Database Engine configuration steps, a “Consent to install Python” window needs to be accepted in order to validate and go to the next installation step.
After that, the installation begins and fails…
The eternal question, WHY???…
Like the majority of people, I didn’t read the text in the “Consent to install Python” window.
The text explains: “By clicking Next, you will start the download and installation of these packages to your machine.” This means that you need Internet access.
You can also go to the installation log file to see the error message:
After enabling Internet access to download all the packages necessary to use Python, the installation was successful.
Out of curiosity, I had a look at the installation file and found the feature parameter for a scripted installation of Python. In addition to ADVANCEDANALYTICS, the parameter to install Python is SQL_INST_MPY.
Like for R, in the instance configuration, you need to set ‘external scripts enabled’ to 1.
If you haven’t read my article on SQL Server 2016: R Services, here is again the method to search for and configure the option:
Script to search the configuration:
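A minimal sketch of such a check and configuration, run in a T-SQL session, would be:

-- check the current value of the option
EXEC sp_configure 'external scripts enabled';
-- enable it (the SQL Server service must be restarted for the change to take effect)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;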

This year again the APEX connect conference spans over three days with mixed topics around APEX, like JavaScript, CSS, SQL and much more.
After the welcome speech and the Keynote about “Reconciling APEX and the Thick Database Paradigm” by Bryn Llewellyn, I decided to attend presentations on the following topics:
– Temporal Validity and Flashback Data Archive
– Universal Theme and Custom Themes in APEX 5.1
– Using REST Services in APEX applications the nice way
– Uncover the Truth (of APEX application)
– Browser Developer Tools for APEX developers
– Docker for Dummies

Thick Database Paradigm:
What was highlighted by Bryn Llewellyn is that the key to proper database development is to encapsulate your database through PL/SQL APIs.
When you work on DB development, make your environment such that it can easily be rebuilt from scratch with scripts, so you have no fear of breaking anything.
Your schema user should have as few rights as needed so you keep your environment safe.
If you build proper APIs, no matter what kind of client application uses your data (APEX, REST, Java web app, …), it will be able to interface with it.
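A minimal sketch of such an API (package and procedure names are examples):

create or replace package customer_api as
-- the only way clients are allowed to touch the CUSTOMER tables
procedure register_customer(p_name in varchar2, p_email in varchar2);
function customer_count return number;
end customer_api;
/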

Temporal Validity and Flashback Data Archive:
There is an increasing demand on data history and audit.
Data history means not only keeping track of past data but also managing different versions of the same data over time (e.g. customer delivery address). This is managed by Temporal validity.
Oracle 12c allows you to automatically manage such time-dependent data by using “ADD PERIOD FOR” on a table.
When retrieving the data use “AS OF PERIOD FOR” in the select statement.
Details can be found on the Oracle website: Implementing temporal validity
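A minimal sketch of the syntax (table and column names are examples; valid_from and valid_to are existing TIMESTAMP columns):

-- add a valid-time period to an existing table
SQL> alter table customer_address add period for valid_time (valid_from, valid_to);
-- query only the rows that are valid right now
SQL> select * from customer_address as of period for valid_time systimestamp;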
Audit can be managed using the well-known trigger approach, with all the issues it can generate, but also automatically by using Flashback Data Archive.
In this second case, audit data is written to a specified tablespace for which you define the data retention period. SYS_FBA tables get automatically created and the information tracked is managed by setting the context level. This is a very powerful tool as it also takes DML changes into account.
Also very important for audit purposes: flashback archive data cannot be modified.
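A minimal sketch of such a setup (archive, tablespace and table names are examples):

-- create a flashback data archive with a one year retention in a dedicated tablespace
SQL> create flashback archive fda_audit tablespace fda_ts retention 1 year;
-- start tracking history for a table
SQL> alter table employees flashback archive fda_audit;
-- query the data as it was one hour ago
SQL> select * from employees as of timestamp (systimestamp - interval '1' hour);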
You can find further information on the following blog: Oracle 12c Flashback Data Archive

Universal Theme and Custom Themes in APEX 5.1:
After a brief overview of Theme and Templates history in APEX, we were shown how easy (at least it seems) it is to create and manage custom Theme and Templates.
Template options introduced in APEX 5 aim to reduce the number of templates for a specific “object” type to a minimum in order to ease maintenance.
Live template options have been introduced with APEX 5.1 to have a preview of the changes at run time and facilitate their usage.
Theme subscription allows to distribute changes made to a master Theme which can now be defined at workspace level.
Theme styles allow you to have a dedicated CSS file on top of your application’s standard CSS and define user-based styles from the Theme Roller.
Note: Themes based on JQuery Mobile for mobile applications should no longer be used; rather use the responsive Universal Theme, as JQuery UI hasn’t been updated for a long time and might have issues with the new JQuery core version that might be used in future versions of APEX.

Using REST Services in APEX applications the nice way:
The definition of REST is based on 3 pillars:
– Resources
– Methods (GET, PUT, POST, DELETE, PATCH)
– Representation (JSON, HTML, CSV, …)
The new REST Client Assistant packaged application in APEX 5.1 will be of great help for developers as it generates the procedures required to parse JSON data returned by a given REST data service URL, as well as the underlying SQL query to display the data in a report.
When the amount of data becomes too large, REST data services can return it in a paginated fashion, which needs to be supported on the client side. At this point only classic reports support that feature in APEX 5.1. Filtering on the data queried from the service also needs to be managed. The REST Data Sample application shows how to implement the different kinds of interaction with REST data services based on Oracle standards.
There will be improvements in supporting REST Data service in the upcoming version 5.2 of APEX, such as remote SQL execution.

Uncover the Truth (of APEX application):
When you have to modify an existing APEX application or take over a customer development, you need to understand the heart of the application, which can be a challenge. To do so you need to identify its structure and how the different elements are used and interact.
Various people are interested in this:
– DB developers
– APEX developers
– Cloud developers
– Project leaders
This is all about:
– Functionality (Page function, application logic, Interfaces)
– Complexity (APEX components, PL/SQL objects, JavaScript, CSS, DB objects)
– Transparency (who changed, when, Conditions, relations between pages)
There are already different tools in APEX allowing to see different aspects of those data:
– Searches
– History
– Application utilities
– Reports
– SQL Workshop
So it can be cumbersome to walk through all those.
We were presented a self-developed tool which can be seen as a kind of “dashboard” to analyze everything in one place, based on all sorts of charts reading the APEX metadata tables. I’m looking forward to seeing it released next summer.

Browser Developer Tools for APEX developers:
The IDE for APEX is the web browser, so it’s important to know about the developer tools provided in your web browser.
Each and every web browser has its own tools, some being better than others.
The most used browser also has the most complete tool set: Chrome (Firefox Developer Edition is also worth looking at).
As there is a lot of information to be displayed in the developer tools, it’s highly recommended to detach the window from the browser and display it on a secondary screen when possible.
The CSS view shows all levels of the style sheet, to understand what is set where.
Specifications of the web page can be modified on the fly to understand their impact.
Debugging of JavaScript can be done with setting breakpoints.
Application panel allows to monitor cookies.
Device mode allows to emulate mobile devices and even set the network speed to have a better understanding of end user experience.
Even remote debugging can be used on attached devices.

Docker for Dummies
I already heard a lot about Docker but I never took some time to look into it, so I took the opportunity to have it shown today.
What is Docker? A light weight VM?
No, a container!
It allows sharing resources and getting rid of things like the hypervisor and a full guest OS (which are used for VMs), which makes it lightweight.
Main characteristics are:
– You can put almost anything into it
– It stays locked
– It’s efficient for transport
– It’s small, light weight
– It’s scalable
Actually it can be seen more as a software delivery platform.
The basic component is an image:
– It contains File System and parameters
– It has no state
– Layers are read only
– Layers are shared
– Updates require only updated files to be downloaded
A container is a running instance of an image. It adds a R/W layer on top of the image.
Images are not cloned.
Persistence of the data used in the container is managed by mapping a local folder into the container.
Although Docker is command-line based, there is a GUI available called Kitematic.
Resources: Docker website, Kitematic website
You can find further details on the following blog: Overview and installation

This week I needed to install Oracle Grid Infrastructure 12c release 1 in a SLES 12 SP1 environment. Everything worked fine until I ran the root.sh at the end of the installation. Here’s a quick description of the problem and the workaround.

After trying multiple times with other Oracle Grid Infrastructure versions, from 11.2.0.4 to 12.2.0.1, I had to open a service request at Oracle, and they provided me the following workaround:

Once root.sh has failed, we do not close the GUI installer window because we will use it to complete the installation after root.sh has completed successfully. First, we have to deconfigure the failed installation:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid # . root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u00/app/grid/product/12.1.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/product/12.1.0.2/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/04/11 15:56:37 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
oracle_grid /u00/app/grid/product/12.1.0.2/grid/cdata/oracle_grid/backup_20170411_155653.olr 0
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracle_grid'
CRS-2673: Attempting to stop 'ora.evmd' on 'oracle_grid'
CRS-2677: Stop of 'ora.evmd' on 'oracle_grid' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracle_grid' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/11 15:57:09 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

After the root.sh is successfully completed, we continue with the Oracle Installer, and everything is correctly configured for the Oracle Grid Infrastructure.

I will keep you informed of the bug evolution, and I will test ASAP the Oracle Grid Infrastructure installation under SLES 12 SP2 …

On 31st of May in Düsseldorf, at DOAG Datenbank, I’ll talk about transportable tablespaces and pluggable databases. Both methods transport data physically; the difference is in the transport of the metadata, which can be more flexible when transported logically, as with TTS, but faster when transported physically with PDBs. I have a lot of demos to show transportable tablespaces with RMAN, and the different cloning features available in 12cR2. If I have time I’ll show what is inside the dumpfile when using Data Pump to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.

This looks like the DDL used to re-create the same table except that we can see two storage attributes that are not documented:

SEG_FILE and SEG_BLOCK

OBJNO_REUSE

SEG_FILE and SEG_BLOCK

When you create an empty table, you just provide the tablespace name and Oracle allocates the first extent, with the segment header. You don’t choose the data placement within the tablespace. But here we are in a different case: the extents already exist in the datafiles that we transport, and the DDL must just map to them. This is why in this case the segment header file number and block number are specified. The remaining extent allocation information is stored within the datafiles (Locally Managed Tablespace); only the segment header must be known by the dictionary.

As an example, when I look at the database where the export comes from, I can see that the attributes for PK_EMP (SEG_FILE 26 SEG_BLOCK 154) are the relative file number and header block number of the PK_EMP segment:
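A query equivalent to this check, assuming the demo objects belong to SCOTT, could be:

SQL> select s.segment_name, f.relative_fno, s.header_block
     from dba_segments s join dba_data_files f on f.file_id = s.header_file
     where s.owner = 'SCOTT' and s.segment_name = 'PK_EMP';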

This file identifier is a relative file number within the tablespace, which means that there is no need to change it when a tablespace is transported.

You will see exactly the same information in the database where you import the tablespace (except for HEADER_FILE which is the absolute file number).

OBJNO_REUSE

Each segment has a DATA_OBJECT_ID, which is referenced in each block and in the ROWIDs. This must not change when we transport a tablespace because the goal is that nothing has to be modified in the datafiles. For this reason, the data object id is exported with the metadata, as we can see for this PK_EMP example (OBJNO_REUSE 73205), and set to the same value in the target dictionary. Here are the data object IDs for the objects exported here:
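They can be listed with a query like this one, assuming the demo objects belong to SCOTT:

SQL> select object_name, object_id, data_object_id
     from dba_objects
     where owner = 'SCOTT' and data_object_id is not null
     order by object_name;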

The OBJECT_ID will be different in the target, assigned in the same way as when we create an object, but this one is not referenced anywhere within the datafiles.

So what?

Usually, the metadata precedes the data. With transportable tablespaces, it is the opposite: data is there and metadata is re-created to map the data. This metadata is what is stored into the dumpfile exported to transport tablespaces.
From what you have seen, you can now understand that the RELATIVE_FNO and the DATA_OBJECT_ID are not unique within a database, but only within a tablespace. You can also understand that the Transportable Tablespace import duration does not depend on the size of the data, but is proportional to the number of objects (metadata). This is where Pluggable Databases are more efficient: metadata is transported physically and the import duration does not depend on the number of objects, especially when it does not involve an upgrade to a new version and object recompilation.

The Optimizer Statistics Advisor is a new Advisor in Oracle 12.2.
The goal of this Advisor is to check the way you gather statistics on your database and, depending on what is found, it will make some recommendations on how you can improve the statistics gathering strategy in order to provide more efficient statistics to the CBO.
This Advisor is also able to generate remediation scripts to apply the statistics gathering “best practices”.
The recommendations are based on 23 predefined rules:
SQL> select rule_id, name, rule_type, description from v$stats_advisor_rules;

You can have a look at this blog if you want a little bit more information about these rules.
If you want to exclude some rules or some database objects from the Advisor’s recommendations, you can define multiple filters (I will do that below).

Well, let’s see how to use the Advisor. The first step is to create a task which will run it:
DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.CREATE_ADVISOR_TASK(tname);
END;
/
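Object and rule filters are defined with DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER and DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER. A minimal sketch of the execution and reporting steps, reusing the task created above, is:

set serveroutput on
DECLARE
tname VARCHAR2(32767) := 'stat_advisor_1';
ename VARCHAR2(32767);
rep CLOB;
BEGIN
ename := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname); -- run the analysis
rep := DBMS_STATS.REPORT_ADVISOR_TASK(tname);    -- text report of findings and recommendations
DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(rep, 4000, 1));
END;
/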

Cool! Nothing to report regarding statistics gathering on my table JOC.T2 (see filter2 above).
But how does the Advisor react when I run it after having deleted the statistics on this table?
SQL> exec dbms_stats.delete_table_stats(ownname=>'JOC',tabname=>'T2');

Recommendation: Gather Statistics on those objects with no statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
-------------------------------------------------------------------------------

It seems to work well. The Advisor detected that there are no stats on the table, and a rule was triggered.
And what about the remediation scripts? Firstly, we have to generate them:
VARIABLE script CLOB
DECLARE
tname VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
:script := DBMS_STATS.SCRIPT_ADVISOR_TASK(tname);
END;
/

PL/SQL procedure successfully completed.

And then display them:
set linesize 3000
set long 500000
set pagesize 0
set longchunksize 100000
set serveroutput on

DECLARE
v_len NUMBER(10);
v_offset NUMBER(10) :=1;
v_amount NUMBER(10) :=10000;
BEGIN
v_len := DBMS_LOB.getlength(:script);
WHILE (v_offset < v_len)
LOOP
DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(:script,v_amount,v_offset));
v_offset := v_offset + v_amount;
END LOOP;
END;
/
-- Script generated for the recommendations from execution EXEC_2182
-- in the statistics advisor task STAT_ADVISOR_1
-- Script version 12.2
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USECONCURRENT
-- Rule Description: Use Concurrent preference for Statistics Collection
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USEDEFAULTPREFERENCE
-- Rule Description: Use Default Preference for Stats Collection
-- Set global preferenes to default values.
-- Scripts for rule USEDEFAULTOBJECTPREFERENCE
-- Rule Description: Use Default Object Preference for statistics collection
-- Setting object-level preferences to default values
-- setting CASCADE to default value for object level preference
-- setting ESTIMATE_PERCENT to default value for object level preference
-- setting METHOD_OPT to default value for object level preference
-- setting GRANULARITY to default value for object level preference
-- setting NO_INVALIDATE to default value for object level preference
-- Scripts for rule USEINCREMENTAL
-- Rule Description: Statistics should be maintained incrementally when it is beneficial
-- Turn on the incremental option for those objects for which using incremental is helpful.
-- Scripts for rule UNLOCKNONVOLATILETABLE
-- Rule Description: Statistics for objects with non-volatile should not be locked
-- Unlock statistics for objects that are not volatile.
-- Scripts for rule LOCKVOLATILETABLE
-- Rule Description: Statistics for objects with volatile data should be locked
-- Lock statistics for volatile objects.
-- Scripts for rule NOTUSEINCREMENTAL
-- Rule Description: Statistics should not be maintained incrementally when it is not beneficial
-- Turn off incremental option for those objects for which using incremental is not helpful.
-- Scripts for rule USEAUTODEGREE
-- Rule Description: Use Auto Degree for statistics collection
-- Turn on auto degree for those objects for which using auto degree is helpful.
-- Scripts for rule AVOIDSTALESTATS
-- Rule Description: Avoid objects with stale or no statistics
-- Gather statistics for those objcts that are missing or have no statistics.
-- Scripts for rule MAINTAINSTATSCONSISTENCY
-- Rule Description: Statistics of dependent objects should be consistent-- Gather statistics for those objcts that are missing or have no statistics.
declare
obj_filter_list dbms_stats.ObjectTab;
obj_filter dbms_stats.ObjectElem;
obj_cnt number := 0;
begin
obj_filter_list := dbms_stats.ObjectTab();
obj_filter.ownname := 'JOC';
obj_filter.objtype := 'TABLE';
obj_filter.objname := 'T2';
obj_filter_list.extend();
obj_cnt := obj_cnt + 1;
obj_filter_list(obj_cnt) := obj_filter;
dbms_stats.gather_database_stats(
obj_filter_list=>obj_filter_list);
end;
/
PL/SQL procedure successfully completed.
SQL>

It was a very simple demo, but as you can see above, the Advisor provides a small script to adjust what is wrong or what is missing concerning the statistics of the table.

Conclusion:
Once you have upgraded your database to Oracle 12.2, don’t hesitate to set up the new Statistics Advisor. It is easy to deploy and can be fully personalized depending on what you want to check (which objects? which rules?). Moreover, it has been developed by the same team who develops and maintains the CBO. Therefore, they know which statistics the Optimizer needs!

In the last post we explained how we did the setup for connecting a PostgreSQL server to the Pure Storage array. In this post we’ll actually give the array something to work with. What we use for the tests is pgbench, which is included with PostgreSQL and can be used to load a database and then perform a TPC-B like performance test against the loaded data. Btw: pgbench is a great tool when you want to run the same tests against different configurations of PostgreSQL, or the same configuration of PostgreSQL against different physical or virtual hardware configurations.

To begin with, let’s create a dedicated database to load the data into:
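Assuming the database is simply named pgbench, that boils down to:

postgres=# create database pgbench;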

We’ll use pgbench in initialization (“-i”) mode with a scale factor (“-s”) of 10000 for populating the database. This will create the pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers tables and will result in 1,000,000,000 rows in total for the pgbench_accounts table.

It took more than twice as long to load the 2,000,000,000 rows, but the limit is not the storage. The average bandwidth went up from around 65 MB/sec to around 90 MB/sec. What is interesting to see is that we need only around 45GB of real storage:

This means we have an almost 9 times compression/de-duplication ratio on the storage layer. Remember that each database has a size of 146GB (146 * 2 = 292GB; 292 / 9 ≈ 32GB, plus the WAL files):

In the next post we’ll run some standard pgbench benchmarks against the data set, although the numbers will probably not be very good as we are running in an ESX test environment and this seems to be the bottleneck in our case. We’ll have to analyze the sar statistics for that, but there was no time yet.

What I can say about the storage system so far: I like the simplicity of the management console. There is not much more you can do than creating hosts, volumes and connections between them, that’s it. Reduction is happening automatically and you have the option to replicate to another storage system. Snapshots are available as well, but not yet tested.

If you are in multitenant, you probably already felt the desire to open the PDB$SEED in READ WRITE mode.

Can you open PDB$SEED read write yourself? Yes and no.

Should you open PDB$SEED read write yourself? Yes and No.

How to run upgrade scripts that need to write to PDB$SEED? catcon.pl

In 12.1 you have no reason to open the seed read write yourself. In 12.2 there is one reason, when you are in LOCAL UNDO mode: you may want to customize the UNDO tablespace.

12cR1 or 12cR2 in shared undo

I am in 12.1, or in 12.2 in shared undo mode:
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';

no rows selected

When the CDB is opened, the PDB$SEED is opened in read only mode.
SYS@CDB$ROOT SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB01 READ WRITE NO
I try to open the PDB$SEED in read write mode (FORCE is a shortcut to avoid having to close it first):
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open force;
Error starting at line : 1 in command -
alter pluggable database pdb$seed open force
Error report -
ORA-65017: seed pluggable database may not be dropped or altered
65017. 00000 - "seed pluggable database may not be dropped or altered"
*Cause: User attempted to drop or alter the Seed pluggable database which is not allowed.
*Action: Specify a legal pluggable database name.
SYS@CDB$ROOT SQL>

Obviously, this is impossible and clearly documented. PDB$SEED is not a legal pluggable database for this operation.

Oracle Script

There is an exception to that: internal Oracle scripts need to run statements in the PDB$SEED. They run with “_oracle_script”=true where this operation is possible:
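For illustration only (this is roughly what those internal scripts do, not something to run yourself on a production CDB):

SQL> alter session set "_oracle_script"=true;
SQL> alter pluggable database pdb$seed close immediate;
SQL> alter pluggable database pdb$seed open read write;
-- ... dictionary changes ...
SQL> alter pluggable database pdb$seed close immediate;
SQL> alter pluggable database pdb$seed open read only;
SQL> alter session set "_oracle_script"=false;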

Of course, when upgrading, there are phases where you need the seed opened read-write. But you don’t do that yourself. The scripts to run in each container are called through catcon.pl which, by default, opens the seed read-write and ensures that the initial open mode is restored at the end, even in case of error.
-m mode in which PDB$SEED should be opened; one of the following values
may be specified:
- UNCHANGED - leave PDB$SEED in whatever mode it is already open
- READ WRITE (default)
- READ ONLY
- UPGRADE
- DOWNGRADE

I have the following “/tmp/show_open_mode.sql” script:
column name format a10
select name,open_mode,current_timestamp-open_time from v$containers;

When the pre-upgrade and post-upgrade scripts are run from DBUA you can see the following in the logs:
exec_DB_script: opened Reader and Writer
exec_DB_script: executed connect / AS SYSDBA
exec_DB_script: executed alter session set "_oracle_script"=TRUE
/
exec_DB_script: executed alter pluggable database pdb$seed close immediate instances=all
/
exec_DB_script: executed alter pluggable database pdb$seed OPEN READ WRITE
/

This is displayed because DBUA runs catcon.pl in debug mode and you can do the same by adding ‘-g’ to the catcon.pl arguments.

12cR2 in local undo

In 12.2 there is a case where you can make a change to the PDB$SEED to customize the UNDO tablespace template. Here I am changing to LOCAL UNDO:
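The switch itself requires a restart in UPGRADE mode, and afterwards the seed can be opened read write to customize its UNDO tablespace; roughly (tablespace name and sizing are examples):

SQL> shutdown immediate;
SQL> startup upgrade;
SQL> alter database local undo on;
SQL> shutdown immediate;
SQL> startup;
SQL> alter pluggable database pdb$seed open force;
SQL> alter session set container=pdb$seed;
SQL> create undo tablespace my_undo datafile size 100M autoextend on;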

You can leave it like this, just close and re-open read only. If you want to keep the same undo tablespace name as before, you need to play with create and drop, and change undo_tablespace again.

So what?

Don’t forget that you should not modify or drop PDB$SEED. If you want a customized template for your PDB creations, then you should create your own PDB template to clone. You can clone remotely, so this is possible in single-tenant as well. Being able to open the PDB$SEED in read write mode exists only for the exception of creating the UNDO tablespace in PDB$SEED when you move to local undo mode. And even this is not required: an UNDO tablespace will be created automatically when you open a PDB that has no undo_tablespace.
When running pre-upgrade and post-upgrade scripts, don’t worry: catcon.pl is there to help run scripts in containers and handles that for you.

A long time ago I blogged about how you can use the PostgreSQL development snapshots to test new PostgreSQL features before alpha/beta/rc releases are officially released. Another way to do this is to use git to get the latest sources and build PostgreSQL from there. Everything which was committed will be available to test. Btw: A great way to stay up to date is to subscribe to the mailing list just referenced. You’ll get a mail for each commit that happened; maybe one of them catches your attention?

To start you’ll obviously need git. For distributions using yum this is just a matter of:

Not all of those packages are required, they just reflect what we usually install before building PostgreSQL from source. Of course you should adjust this and remove packages that are not required for what you plan to do.

Ready to test? Yes, but what? One possible way to start is asking git for what was committed recently:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log
commit 0de791ed760614991e7cb8a78fddd6874ea6919d
Author: Peter Eisentraut peter_e@gmx.net
Date: Wed May 3 21:25:01 2017 -0400
Fix cursor_to_xml in tableforest false mode
It only produced elements but no wrapping table element.
By contrast, cursor_to_xmlschema produced a schema that is now correct
but did not previously match the XML data produced by cursor_to_xml.
In passing, also fix a minor misunderstanding about moving cursors in
the tests related to this.
Reported-by: filip@jirsak.org
Based-on-patch-by: Thomas Munro thomas.munro@enterprisedb.com
...

Yesterday we got a Pure Storage All Flash Array for testing. As the name implies this is all about Flash storage. What makes Pure Storage different from other vendors is that you don’t buy just a storage box and then pay the usual maintenance costs but you pay for a storage subscription which should keep your storage up to date all the time. The promise is that all the components of the array get replaced by the then current versions over time without forcing you to re-buy. Check the link above for more details on the available subscriptions. This is the first post and describes the setup we did for connecting a PostgreSQL VMWare based machine to the Pure Storage box. The PostgreSQL server will be running as a virtual machine in VMWare ESX and connect over iSCSI to the storage system.

… and then restart the instance. When all is fine PostgreSQL will come up.

Finally to close this setup post here are some screenshots of the Pure Storage Management Web Console. The first one shows the “Storage” tab where you can see that the volume “volpgtest” is mapped to my host “pgpurestorage”.

The name you give the server is not important. The important information is the mapping of the “Host Port” which you can see here (this is the iSCSI IQN):

Once your server is connected you can see it in the connection map of the server in the console:

System health:

Last, but not least, here is the dashboard:

Not much traffic right now but we’ll be changing that in the next post.

The title sounds weird because Data Guard synchronisation is based on the redo stream, so it makes no sense to do nologging operations on the primary. And this is the reason why we set FORCE LOGGING on a Data Guard configuration. However, to lower the downtime of a migration done with Data Pump, you may want to import with minimal logging and then re-synchronize the standby. This post is about the re-synchronisation in 12.1.

Nologging Data Pump

When you want to lower the downtime for a migration, you can disable force logging (alter database no force logging), and run impdp with the following: transform=disable_archive_logging:y
Don’t forget to re-enable force_logging at the end and to re-synchronize the standby.
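For reference, the switch is done at database level:

-- before the import, on the primary
SQL> alter database no force logging;
-- after the import, put it back
SQL> alter database force logging;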

If you are in 12.2 then it is only half easy. You can see where you have nonlogged blocks:
RMAN> select file#,reason,count(*) from v$nonlogged_block group by file#,reason;
FILE# REASON COUNT(*)
---------- ------- ----------
5 UNKNOWN 158
6 UNKNOWN 159
7 UNKNOWN 336
8 UNKNOWN 94
9 UNKNOWN 16
10 UNKNOWN 14
And this is the right way to query them. If you use RMAN ‘report unrecoverable’, it will not display the datafiles that had nologging operations on the primary.

In 12.1 you can RESTORE FROM SERVICE to recover from the primary rather than from a backup. It is straightforward. I’m just writing this blog post in case you see the following when you try to do this because the message can be misinterpreted:

RMAN is clever enough: the datafiles are ok according to their headers, so it skipped the restore. But you know that they are not ok, because some blocks are marked as corrupt due to nologging operations. Then what to do? There is a FORCE option in the restore command. But you probably don’t need it. If you get the previous message, it means that the datafiles are synchronized, which means that the APPLY is running. And anyway, in order to restore you need to stop the APPLY.

DGMGRL> edit database orclb set state=apply-off;

Of course, once you have stopped the apply, you run your RESTORE DATABASE FORCE. But you probably don’t need the FORCE: now the datafiles are stale and RMAN will not skip them even without the FORCE keyword.

When you see all datafiles skipped, that probably means that you didn’t stop the APPLY. With the APPLY stopped (and you probably stop it before the import, as you plan to restore the standby later), you probably don’t need the FORCE option. However, I’ll always recommend using FORCE in this case, because without it RMAN may skip files based only on their headers, without looking at the unlogged blocks. Imagine that you put a tablespace in read-only after the non-logged import but before stopping the apply: that one would be skipped.

This post is mainly for self-documentation as I have to search my old notes every time I want to do this (I am faster when searching our blog): if you want to monitor GoldenGate with Cloud Control using the GoldenGate plugin, you’ll have to install the JAGENT on the host where GoldenGate is running, and the documentation is not quite clear on what you have to do exactly (at least when you want to use the silent installation). This is what you need to do for 12.2:

Download “Oracle GoldenGate Monitor 12.2.1.2.0” from here. You need to have Java 8 installed (the Oracle version, not OpenJDK) on the server you want to install the JAGENT on. You can download it from here.

Once you have Java 8 installed, create a response file like this:

oracle@oelogg1:/var/tmp/ [rdbms12102] cat oggmon.rsp
#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0
[GENERIC]
#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/ogg/product/agent/12.2.1.2.0
INSTALL_TYPE=GoldenGate Monitor Agent Installation

I had a recent question about the following mention from the 12cR2 Multitenant book, about partial PDB backups:
Here is an example in 12.2 with local undo to illustrate the answer, which may help to understand what a partial PDB backup is.
Of course, since 12cR1 you can back up PDBs individually, without the CDB$ROOT, in the same way you can back up only a subset of tablespaces of a CDB. It can be part of your backup strategy, but it is not to be considered as a backup that you can restore elsewhere later. A PDB is not self-consistent without the CDB$ROOT except if it has been closed and unplugged. In 12.1 you cannot restore a partial PDB backup if you don’t have the CDB$ROOT at the same point in time, because the recovery phase will need to roll back the ongoing transactions, and this requires having the UNDO tablespace recovered at the same point in time.

However, in 12.2 with LOCAL UNDO, the partial PDB backup contains the local UNDO tablespace and then it can be sufficient to do a PDB Point In Time Recovery within the same CDB. And, in this case only, it is not required to have a backup of the root.

Thanks to LOCAL UNDO there is no need to restore the CDB$ROOT into an auxiliary instance, as was the case for PDBPITR in 12.1, and then we can do PDBPITR without a backup of the root.

So what?

In theory, and as demonstrated above, backing up the CDB$ROOT along with the PDB is not mandatory in 12cR2 in local undo mode. However, keep in mind that this is for academic purposes only, not for real-life production. For a short-term point in time, you will not use backups but flashback. For a long-term restore, you may have different reasons to restore the PDB elsewhere with its CDB$ROOT at the same point in time: some common objects (users, roles, directories, etc.) may have changed. And anyway, your backup strategy should be at CDB level.

First question: Is there a version of GoldenGate which is able to extract from Oracle 8.1.7.4 on Solaris 8? Yes there is: p9271762_1040_SOLARIS64.zip. When you want to download that you’ll have to create a service request with Oracle Support because these old releases are password protected and you have to agree that you do not get any support for this.

The schema check script referenced here: Oracle GoldenGate database Complete Database Profile check script for Oracle DB (All Schemas) Classic Extract (Doc ID 1298562.1) has quite a few issues with 8.1.7.4 as it references columns in the dictionary that do not exist in this version of Oracle (e.g. the checks for compression on table and tablespace level), but you can still use the output and do some manual work.

PostgreSQL is one of the most widely used RDBMS.
In this blog we are going to talk about migrating from Oracle to PostgreSQL using ora2pg. A previous blog about this topic can be found here.
After installing the ora2pg tool, we will see how to configure and run it to migrate our data. We are using an Oracle 12.1 database and a PostgreSQL 9.6.2 instance. The server is running on OEL 7.2.
The Oracle database and the PostgreSQL server are running on the same host, but different servers can be used.

Now that ora2pg is installed, we can proceed with the migration. The first step is to configure the ora2pg.conf file. We can make a copy of the default template and then modify the file. In our case the configuration file is located in the /etc/ora2pg directory.
[root@serveroracle ora2pg]# cp ora2pg.conf.dist ora2pg.conf
[root@serveroracle ora2pg]# vi ora2pg.conf

ora2pg works by exporting and importing schemas. We can only export one schema at a time if we use the SCHEMA option. If we want to export all schemas we can just comment out the SCHEMA option. In this case all users except the Oracle system users will be extracted.
In the documentation we also have the option SYSUSERS:
# Allow to add a comma separated list of system user to exclude from
# from Oracle extraction. Oracle have many of them following the modules
# installed. By default it will suppress all object owned by the following
# system users:
# CTXSYS,DBSNMP,EXFSYS,LBACSYS,MDSYS,MGMT_VIEW,OLAPSYS,ORDDATA,OWBSYS,
# ORDPLUGINS,ORDSYS,OUTLN,SI_INFORMTN_SCHEMA,SYS,SYSMAN,SYSTEM,WK_TEST,
# WKSYS,WKPROXY,WMSYS,XDB,APEX_PUBLIC_USER,DIP,FLOWS_020100,FLOWS_030000,
# FLOWS_040100,FLOWS_FILES,MDDATA,ORACLE_OCM,SPATIAL_CSW_ADMIN_USR,
# SPATIAL_WFS_ADMIN_USR,XS$NULL,PERFSTAT,SQLTXPLAIN,DMSYS,TSMSYS,WKSYS,
# APEX_040200,DVSYS,OJVMSYS,GSMADMIN_INTERNAL,APPQOSSYS
# Other list of users set to this directive will be added to this list.
#SYSUSERS OE,HR

Once finished, a file HR_output.sql is generated. This file can be used to load the data into the PostgreSQL database. We can also load the data directly without using a file; we just have to specify the connection info for PostgreSQL.
Let’s now prepare the PostgreSQL server to receive our data. First we create the user HR.
[postgres@serveroracle ~]$ psql
psql (9.6.2 dbi services build)
Type "help" for help.
postgres=# \c orclpg
You are now connected to database "orclpg" as user "postgres".
orclpg=# create user HR WITH PASSWORD 'root';

And then we can execute the file. The first time we ran the file, we had some constraint errors:
orclpg=# \i HR_output.sql
ERROR: insert or update on table "countries" violates foreign key constraint "countr_reg_fk"
DETAIL: Key (region_id)=(2) is not present in table "regions".
STATEMENT: COPY countries (country_id,country_name,region_id) FROM STDIN;
psql:HR_output.sql:224: ERROR: insert or update on table "countries" violates foreign key constraint "countr_reg_fk"
DETAIL: Key (region_id)=(2) is not present in table "regions".
orclpg=#

To correct this, we set the DROP_FKEY option to 1 in the configuration file:
DROP_FKEY 1

With this option, all foreign keys are dropped before the data import and recreated at the end of it. After that, the load was successful:
orclpg=# \i HR_output.sql
SET
CREATE SCHEMA
ALTER SCHEMA
SET
CREATE TABLE
COMMENT
SET
SET
SET
SET
SET
BEGIN
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
SET
COPY 107
SET
COPY 19
SET
COPY 10
SET
COPY 23
SET
COPY 4
ALTER TABLE
ALTER TABLE
ALTER TABLE
…
…
COMMIT
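As a quick sanity check after the import, assuming the standard HR sample schema was migrated into the hr schema, row counts can be compared on the PostgreSQL side (they should match the COPY counts above, e.g. 107 employees and 4 regions):

orclpg=# select count(*) from hr.employees;
orclpg=# select count(*) from hr.regions;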