
Thursday, March 20, 2014

When it comes to deploying a Windows guest as a virtual machine on Oracle's ODA, the documentation is lacking. After some research and trial and error, I was able to put together a process that will successfully install Windows.

The ODA utilizes the oakcli command-line interface, so Oracle VM Manager is not an option. In this post, I will walk through the steps needed to prepare and deploy a Windows 7 virtual machine.

In the same staging directory, create a vm.cfg file with the following content. The "boot" parameter is set to "dc", which puts the CDROM first in the boot order. The "disk" parameter names the System.img file created above as device "hda" in read/write mode. Until the paravirtualized (PV) drivers are installed, the "vif" parameter needs to be set with a type of "ioemu". Later, when the VM is started, a VNC session for the console will be started. The VNC port is set in the "vfb" parameter with a vncdisplay of 10, which sets the port to 5910. A value of 1 would set it to 5901.
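As a sketch, a vm.cfg along these lines matches the description above; the memory, vcpu, and bridge values are assumptions for illustration and should be sized for your environment:

```
builder = 'hvm'
name    = 'vm_win7x64'
memory  = 4096                # assumed sizing
vcpus   = 2                   # assumed sizing
boot    = 'dc'                # CDROM first in the boot order
disk    = ['file:/OVS/staging/System.img,hda,w']
vif     = ['type=ioemu,bridge=net1']   # ioemu required until the PV drivers are installed
vfb     = ['type=vnc,vncunused=0,vncdisplay=10']   # vncdisplay=10 -> port 5910
```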

There are default repositories created as part of the ODA_BASE. The repos "odarepo1" and "odarepo2" are local repositories, so VMs running from them cannot fail over to the other node. To allow failover, create a shared repository.

NOTE: Once you create the repo, the size cannot be changed, so plan accordingly.

Log on to ODA_BASE and run the following command to create a 150GB shared repository in the DATA disk group.

$ oakcli create repo odashr -size 150G -dg DATA

Now the environment is set up to import the template. Log in to ODA_BASE as root and run the following command.

Copy the Windows installation iso file to a staging location on Dom0. Then modify the file /OVS/Repositories/odashr/VirtualMachines/vm_win7x64/vm.cfg on Dom0 by adding the iso as a CDROM. Below, the iso file /OVS/staging/X17-24281.iso has been defined as device "hdc" and made read only.
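The modified "disk" entry might look something like this; the System.img path within the shared repository is an assumption for illustration:

```
disk = ['file:/OVS/Repositories/odashr/VirtualMachines/vm_win7x64/System.img,hda,w',
        'file:/OVS/staging/X17-24281.iso,hdc:cdrom,r']
```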

Now start the VM by logging into ODA_BASE and running the following command.

$ oakcli start vm vm_win7x64

Once the VM starts, log onto Dom0 and attach to the console with a VNC client using the vnc port defined above. In our example, the port is 5910. From the VNC console, you need to install Windows.

After the installation is complete, log into Dom0 and modify the /OVS/Repositories/odashr/VirtualMachines/vm_win7x64/vm.cfg file. Change the boot parameter from "dc" to "cd". This sets the virtual hard drive, rather than the CDROM, first in the boot order.
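The change is a single line in vm.cfg:

```
boot = 'cd'   # hard drive (hda) first, CDROM second
```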

Log into ODA_BASE and restart the VM.

$ oakcli stop vm vm_win7x64
$ oakcli start vm vm_win7x64

Now attach to your vnc session again and configure the network settings for the VM guest as per your requirements. Once the network is configured and functioning, you can download the Windows PV Drivers to the guest and install them by running the executable. As of the date of this post, the PV drivers can be downloaded from edelivery.oracle.com. Go to "Cloud Portal Oracle Linux/VM", and enter "Oracle VM" for the product pack and "x86 64 bit" for the platform.

After the PV drivers have been installed, modify the /OVS/Repositories/odashr/VirtualMachines/vm_win7x64/vm.cfg file by changing the "vif" parameter as follows.

Tuesday, February 18, 2014

There's not much more frustrating than knowing how to tune a query but being unable to because of one or more third-party views that cannot be modified. A great way to work around this is by using Oracle's global hints in the main query block. This is accomplished by referring to the view name and the table name with the format /*+ hint(view.table) */.

According to Oracle's documentation, however, the optimizer ignores global hints in this format that refer to multiple query blocks. So if you wanted to use the "leading" hint with more than one view like this /*+ leading(view1.table1 view2.table2) */, Oracle will not utilize your input, and you will be left banging your head against the wall.

Luckily, there is another format for referencing these query blocks, but it takes a little digging to get it right. In the following example, I will create a table with a view that references it along with another view on the ever famous "dual" table. These two views will be joined in a query and we will see what options there are for manipulating the execution plan without changing the views.

Now, if we want to force the optimizer to use a full table scan on the ORDERS table, we can throw in a global hint that references the view and the table. As you can see from the execution plan, the optimizer follows the directive and a full table scan is performed.
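As a sketch, with hypothetical names (V_ORDERS over the ORDERS table, V_DUAL over DUAL) standing in for the example objects, the global hint looks like this:

```sql
-- the global hint references the table through the view: hint(view.table)
SELECT /*+ full(v_orders.orders) */ *
  FROM v_orders o,
       v_dual   d
 WHERE o.status = d.dummy;
```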

If we try the same approach with the "leading" hint which references more than one query block, Oracle is not so cooperative. After running this example, we see that the "leading" hint is ignored and DUAL is still the driving table.

In order to give the optimizer the information it needs in this situation, we need to change our "DBMS_XPLAN" script so it will include the "alias" and the "outline" information. In the "DBMS_XPLAN" script below, I have also eliminated the "predicate" and "note" data.

In the output above, the operations are numbered in the execution plan. These numbers correspond to the numbers identifying the query block and object alias. These two query blocks and aliases are found in the outline section with a leading hint. This leading hint can be modified by listing the ORDERS table first as shown below and using the ORDERS alias in the full table scan hint as well. The optimizer now accepts the hints and executes the query as directed.
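The query-block-qualified form might look like the following; the @"SEL$..." names here are placeholders, since the real query block names must be copied from the alias and outline sections of your own plan:

```sql
SELECT /*+ leading(@"SEL$5C160134" "ORDERS"@"SEL$2" "DUAL"@"SEL$3")
           full(@"SEL$5C160134" "ORDERS"@"SEL$2") */ *
  FROM v_orders o,
       v_dual   d
 WHERE o.status = d.dummy;
```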

Friday, February 7, 2014

If you are running Oracle on Linux, there is a handy little utility called logrotate that can be used to manage those unwieldy alert logs and listener logs.

I recently worked on a client's database, and their alert log had information going back four years. When trying to examine the log with vi, it would take several minutes to open the file due to its enormity. To help them out, I used this simple little tool to keep the files in check.

As root, I simply created a new file in /etc/logrotate.d on each RAC node called ora_cleanup. The example below is from node 2. This will copy the alert log with a numerical extension and truncate the existing file. It performs this monthly and keeps 13 months' worth of log information before deleting old files. The listener log grows much faster, so it is configured to rotate weekly and keep 53 weeks of files. The copytruncate parameter is important for the listener log because the listener process holds an open handle on the existing file.
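A minimal /etc/logrotate.d/ora_cleanup along those lines is sketched below; the diag and listener trace paths are assumptions, so substitute the paths from your own environment:

```
/u01/app/oracle/diag/rdbms/orcl/orcl2/trace/alert_orcl2.log {
    monthly
    rotate 13
    copytruncate
    missingok
}

/u01/app/oracle/diag/tnslsnr/node2/listener/trace/listener.log {
    weekly
    rotate 53
    copytruncate
    missingok
}
```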

Tuesday, January 28, 2014

One of my favorite 12c enhancements for GoldenGate is its new conflict detection and resolution (CDR) feature for two-way replication. In previous versions, the replicat parameter files had to contain SQLEXEC commands that would query the target table before applying any DML. This added an additional call to the database and slowed performance.

CDR provides new parameters that simplify the detection and resolution of conflicting data. Here are some of the optional keywords that can be combined with the new RESOLVECONFLICT parameter.

These can be configured to regard one data source as the master and always overwrite the other. They can utilize a timestamp column that determines the "winner" based on the most recent data. For data such as inventory, the "delta" can be used to apply changes from both sites by adjusting the value rather than simply replicating it.

GoldenGate requires the "before" images to be captured for all relevant columns in the source database. Here is how trandata can be configured for all columns in the SCOTT schema in the PDBORCL pluggable database.
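For example, from GGSCI (the login alias is an assumption):

```
GGSCI> DBLOGIN USERIDALIAS ggadmin
GGSCI> ADD SCHEMATRANDATA pdborcl.scott ALLCOLS
```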

In addition, the extract parameter files need the GETBEFORECOLS option as a part of the TABLE parameter. In the example below, the extract will capture all of the columns in the SCOTT.ITEMS table for each update and delete. The before image of each of these records will be loaded into the trail file.
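In the extract parameter file, that TABLE entry would look something like this:

```
TABLE pdborcl.scott.items, GETBEFORECOLS (ON UPDATE ALL, ON DELETE ALL);
```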

The final step is to configure the replicat parameter files with the RESOLVECONFLICT parameter. In the example below, CDR is configured for the SCOTT.ITEMS table. If the record exists on the target for an update, the DML_TIMESTAMP column will be compared, and the record with the most recent time will "win" for all of the columns except QTY_ON_HAND. This column will be updated with the difference between the old and new record values on the source using the UPDATEROWEXISTS and USEDELTA keywords.
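A sketch of that MAP statement follows; the resolution name "delta_res" is arbitrary, and the target schema mapping is an assumption:

```
MAP pdborcl.scott.items, TARGET scott.items,
  COMPARECOLS (ON UPDATE ALL),
  RESOLVECONFLICT (UPDATEROWEXISTS,
    (delta_res, USEDELTA, COLS (qty_on_hand)),
    (DEFAULT, USEMAX (dml_timestamp)));
```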

For example: If QTY_ON_HAND is updated on both databases in a bi-directional replication setup, the change needs to be reflected appropriately on each system. Let's say that the value on both systems starts out at 22. The value on DB1 is decremented to 18, while the value on DB2 is incremented to 30. When the two updates pass each other and get applied on their respective targets, the records cannot simply be replicated as is. Simple arithmetic must be applied to both sides in order to account for the change in inventory on both sides. The "delta" for DB1 is -4. The "delta" for DB2 is +8. Therefore, the value on DB1 (18) will be incremented by 8 for a new total of 26. The value on DB2 (30) will be decremented by 4 for a new total of 26. Now, rather than having a "winner", both sources of data will be equal based on the "delta".
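The delta arithmetic above can be sketched as:

```python
def apply_delta(local_current, remote_before, remote_after):
    """USEDELTA: apply the other site's net change to the local value."""
    return local_current + (remote_after - remote_before)

# DB1 holds 18 and receives DB2's change (22 -> 30, a delta of +8)
db1 = apply_delta(18, 22, 30)
# DB2 holds 30 and receives DB1's change (22 -> 18, a delta of -4)
db2 = apply_delta(30, 22, 18)
print(db1, db2)  # both sides converge to 26
```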

Wednesday, January 8, 2014

The Credential Store is a new 12c security feature in GoldenGate that has been implemented as an autologin wallet in Oracle's Credential Store Framework. User IDs and passwords are encrypted in the store and, as a result, an encryption key in the connection string is no longer needed.

The default location of the store is in the ./dircrd directory of the GoldenGate software home. If you want to change the location, you can edit the ./GLOBALS file with the following CREDENTIALSTORELOCATION parameter.
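For example, in ./GLOBALS (the wallet path here is just an illustration):

```
CREDENTIALSTORELOCATION ./dirwallet
```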

Once the credential store has been created, users and passwords can be added to it. One of the key features of the store is the use of domains, which can be used to logically group login aliases. The same alias can be defined in different domains with different credentials. This can be handy when developing and testing in different database environments from the same GoldenGate installation. The default domain is "Oracle GoldenGate".

In this example, the user c##ggsadmin is added to the store in the "test" domain. If the "password" keyword is omitted, GoldenGate will prompt for the password and hide it from the output.
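From GGSCI, the commands would look something like this (the alias name is an assumption):

```
GGSCI> ADD CREDENTIALSTORE
GGSCI> ALTER CREDENTIALSTORE ADD USER c##ggsadmin ALIAS ggadmin DOMAIN test
```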

If you want to see the information maintained in the store, you can use the INFO CREDENTIALSTORE command. If you don't specify the domain, it will default to "Oracle GoldenGate". As you can see below, the default domain is still empty.
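For example:

```
GGSCI> INFO CREDENTIALSTORE DOMAIN test
GGSCI> INFO CREDENTIALSTORE
```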

Tuesday, December 24, 2013

With the new features of GoldenGate 12c and the architectural changes brought on by the multitenant databases, there also come some new requirements for configuring replication.

In order to create an extract process for a multitenant database, it must be created at the root container level with a "common" database user and must be defined to run in the "integrated" capture mode. Replicats, on the other hand, must be created at the pluggable database level and can be defined to run in either the "classic" or "integrated" modes.

Below, I will step through the configuration of the databases, extract, and replicat for a multitenant environment. In this example, there are two databases sharing a host, so there is no pump and there is only one GoldenGate environment.

GoldenGate

Version - 12.1.2.0

Source Database

Version - 12.1.0.1

Root SID - orcl

Pluggable Databases - pdborcl, pdb2orcl

Target Database

Version - 12.1.0.1

Root SID - orcl2

Pluggable Databases - pdborcl2, pdb2orcl2

Preparing the source database includes the creation of a "common" user, adding supplemental logging at the database level, enabling flashback query, and properly setting the streams_pool_size init parameter.
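A sketch of those steps as SYSDBA in the root container follows; the user name, password, and pool size are assumptions to be adjusted for your environment:

```sql
-- create the common user and grant GoldenGate privileges across containers
CREATE USER c##ggsadmin IDENTIFIED BY password CONTAINER=ALL;
GRANT DBA TO c##ggsadmin CONTAINER=ALL;
EXEC dbms_goldengate_auth.grant_admin_privilege('c##ggsadmin', container=>'all');

-- database-level supplemental logging and flashback query
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
GRANT FLASHBACK ANY TABLE TO c##ggsadmin CONTAINER=ALL;

-- streams pool for the integrated capture
ALTER SYSTEM SET streams_pool_size = 1G SCOPE=SPFILE;
```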

On the source, create the extract parameter file ./dirprm/e1aa.prm. Notice that the TABLE parameter must include the container name along with the schema. Alternatively, the SOURCECATALOG parameter may be utilized. Examples of both are in this file. If configuring an integrated replicat, use the required LOGALLSUPCOLS parameter in the extract to capture the before and after values of the primary key, unique indexes, and foreign keys.
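A sketch of ./dirprm/e1aa.prm along those lines (the login alias and trail name are assumptions):

```
EXTRACT e1aa
USERIDALIAS ggadmin
LOGALLSUPCOLS
EXTTRAIL ./dirdat/aa
-- TABLE fully qualified with the container name
TABLE pdborcl.scott.*;
-- or equivalently, using SOURCECATALOG
SOURCECATALOG pdb2orcl
TABLE scott.*;
```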

On the source, start GGSCI and add supplemental logging for the objects to be replicated, create and register the extract, and create the trail file. The name of the container must precede the schema name.
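The GGSCI session might look like this (again, the login alias and trail name are assumptions):

```
GGSCI> DBLOGIN USERIDALIAS ggadmin
GGSCI> ADD SCHEMATRANDATA pdborcl.scott
GGSCI> ADD EXTRACT e1aa, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> REGISTER EXTRACT e1aa DATABASE CONTAINER (pdborcl, pdb2orcl)
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT e1aa
```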

On the target, create the replicat parameter file ./dirprm/r2aa.prm. There is no need to create a checkpoint table for a replicat in "integrated" mode. Again, the MAP parameter must include the source container name. This may also be accomplished with the SOURCECATALOG parameter. Examples of both are in this file.
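A sketch of ./dirprm/r2aa.prm follows; the target login alias connects directly to the pluggable database, and the names are assumptions:

```
REPLICAT r2aa
USERIDALIAS ggtarget
-- the MAP source is qualified with the source container name
MAP pdborcl.scott.*, TARGET scott.*;
-- or equivalently, using SOURCECATALOG:
-- SOURCECATALOG pdborcl
-- MAP scott.*, TARGET scott.*;
```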

Thursday, December 19, 2013

As you know, Oracle bought GoldenGate four years ago. In its new 12c release, Oracle has now integrated the product further into its standards by offering an installation option using the OUI.

For those of you who have installed earlier versions, you're probably thinking that the old install amounted to little more than unzipping a file and creating the subdirs. I agree, but it makes sense why Oracle is moving in this direction. As you'll see, the installation process updates the Oracle Inventory, and there is a new OPatch directory allowing for a standard method of patching.

The documentation specifies that this version of the OUI does not support upgrades, so you'll have to revert to the old method if that is your current situation.

Here are the steps to installing GoldenGate with the new OUI.

Once you have downloaded the software, unzip it in a temporary location. Don't unzip it in the GoldenGate home as you would with the previous versions.

For my environment, the file name is 121200_fbo_ggs_Linux_x64_shiphome.zip, and it creates the subdirectory fbo_ggs_Linux_x64_shiphome.

$ cd fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller

Select the version of Oracle that the capture and/or apply process will be running against.

Enter the GoldenGate software home and the location of the database home that GoldenGate will be operating against. You can also check whether or not you want the manager process to be started and customize the desired port.

Review the options and click "Install".

And that's it. The installation is complete. The subdirs have been created and the manager process has been started.

If you navigate to your new GoldenGate home that you specified above and list the contents, you'll see the OPatch directory along with all of the subdirs.

Additionally, you can navigate to the OPatch directory and list the Oracle inventory as follows.
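For example, from the GoldenGate home:

```
$ cd OPatch
$ ./opatch lsinventory
```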