All About Oracle...

Disclaimer: All data and information provided on this site is for informational purposes only. All usernames, passwords, server names, and IP addresses used on oracleabout.blogspot.com are fictitious and are not affiliated with any company or organization.

This is Bug 19132065 - Oracle Linux semtimedop() wakeups by timeout are lagging, causing offload operations to fail (which may degrade performance) and raising errors similar to one or more of the following:
• ORA-700 [Offload issue job timed out]
• ORA-700 [Offload group not open]
• RS-700 [Celloflsrv hang detected. It will be terminated]

This bug affects Exadata storage server version 12.1.1.1.
It is caused by delayed RCU processing on the DB node, which causes offload jobs to fail on the cell services.
It affects database performance, not availability.
The error occurs mostly when cellsrv attempts read optimization.
Reducing the RCU delay is the workaround, applied across the whole stack.
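To check whether a storage cell has actually raised the offload errors listed above, you can review the cell alert history with CellCLI. This is only a quick sketch: it assumes root access on the cell, and the cell_group file used by dcli in the second command is a placeholder for a file listing your cell hostnames.

[root@cell01 ~]# cellcli -e list alerthistory detail
[root@db01 ~]# dcli -g cell_group -l root "cellcli -e list alerthistory"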

This workaround is automatically applied in the following cases:
•When a new system is deployed with Exadata 11.2.3.3.1 or 12.1.1.1.1 using OEDA Sep 2014 or later.
•When storage servers are upgraded to 11.2.3.3.1 or 12.1.1.1.1 and the patchmgr plugins patch is properly staged before running patchmgr, as documented.
•When database servers are upgraded to 11.2.3.3.1 or 12.1.1.1.1 using dbnodeupdate.sh v3.58 or later.

If the prerequisite checks pass, then start patch application. Use the -rolling option if you plan to use rolling updates. Use the -ignore_alerts option to ignore any open hardware alerts on the cells and continue. Use the -smtp_from and -smtp_to options to set the e-mail addresses that receive patchmgr alert messages.
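For example, a typical cell patching run looks like the following. This is only a sketch: the cell_group file listing the storage cells and the e-mail addresses are placeholders, and the patch directory name depends on the release you downloaded.

[root@db01 patch_dir]# ./patchmgr -cells cell_group -patch_check_prereq -rolling
[root@db01 patch_dir]# ./patchmgr -cells cell_group -patch -rolling -ignore_alerts -smtp_from "dba@example.com" -smtp_to "oncall@example.com"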

•Always run ExaPatch from the PSU bundle directory (where the exapatch_descriptor.py file is located) using the full path to exapatch from any compute node within the rack (see the example after this list).
•Do not run ExaPatch directly on the compute node being patched.
•ExaPatch patches the NM2-GW and NM2-36P switches one switch at a time (rolling upgrade). The switch running the master subnet manager is patched after all the non-master switches have been patched.
•The upgrade procedure supports upgrading the compute nodes one node at a time (rolling upgrade). Upgrading one node at a time ensures that the hosted services and applications are not disrupted.
•Oracle recommends that these patches be applied to a test or nonproduction system before they are applied to the production system. The total time taken for patching the test system can be used as a baseline for scheduling the maintenance windows to patch the production system.
•Perform the patching by following the steps exactly as documented in this readme.
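Putting the first two rules together: to patch a node, log in to a different compute node, change to the PSU bundle directory (the path shown here matches the psuSetup.sh output later in this post), and call exapatch by its full path. Target_Name is only a placeholder; use the targets documented in the PSU readme.

[root@compute-node2]# cd /exalogic-lcdata/patches/Virtual/18178980/
[root@compute-node2]# /exalogic-lctools/bin/exapatch -a patch Target_Name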

--------------------------------------------
1 Set up the PSU
--------------------------------------------

~~~~~~~~

1. Log in to the compute node as root.
2. cd /exalogic-lcdata/patches
3. Add execute permissions for psuSetup.sh using the chmod command. In the following example, execute permissions for all users is added:
# chmod a+x psuSetup.sh
4. Run the script:
./psuSetup.sh ZFS_IP_Address [--mountonly] [--unmountonly] [--remount] [--verbose] [--force] [--help]

[root@DummyCN01 patches]# ./psuSetup.sh 1.1.1.10
INFO: Pre-requiste Check...
INFO: Checking for Python version...
INFO: Python version check... succeeded
INFO: Checking for Root permissions...
INFO: /exalogic-lcdata is already mounted from 1.1.1.10
INFO: /exalogic-lctools is already mounted from 1.1.1.10
INFO: Extracting PSU Bundle data to /exalogic-lcdata. This will take a few minutes
ExaBR 1.1 (build 5951)
ExaBR 1.1 (build 5951)
INFO: /exalogic-lctools Version: 14.1 and expatch Version: 1.2.1 is already installed
INFO: Installation complete.
INFO: ExaPatch is installed in /exalogic-lctools/bin/exapatch
INFO: PSU is installed in /exalogic-lcdata/patches/Virtual/18178980/
#####

The following prerequisites must be fulfilled on all Exalogic Control vServers before they can be upgraded to version 12.1.4 b2500

•When updating the Exalogic Control services, ExaPatch must be run on a compute node with TCP/IP access to all Exalogic Control vServers.
•All the Exalogic Control vServers must be running. Verify that access to vServer-EC-OVMM and to the two vServer-EC-EMOC-PC is OK by running the following ExaPatch command:
[root@compute-node]# /exalogic-lctools/bin/exapatch -a checkAuthentication
•Log in to the Exalogic Control BUI and make sure that assets (switches, storage) are managed by a single ProxyController (PC) at a time. If any of the assets appear to be managed by both ProxyControllers, refer to the Troubleshooting section.
•Back up the Exalogic Control Stack using the ExaBR tool, as described in Section 4.1 of Oracle Exalogic Elastic Cloud Backup and Recovery Guide Using ExaBR.

---Verify that only one target is displayed by the sessions command, as shown in the following example:
[root@nm2gw-ib01 ~]# spsh
-> show /SP/sessions
/SP/sessions
Targets:
120350 (current)
Properties:
Commands:
cd
show
----Edit the timeout for the ILOM session:
-> set /SP/cli timeout=1
Set 'timeout' to '1'
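To double-check the new value before closing the session, display the properties of the /SP/cli target (a quick sketch; output trimmed to the relevant property and may look slightly different across ILOM versions):

-> show /SP/cli
 Properties:
  timeout = 1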

-----Compute nodes can be patched in the following ways:
•Rolling: Patch one node at a time
o This method patches one node at a time. Only one node is being patched at any point in time, so the other nodes can continue to provide services, but the patching process takes more time.
•Parallel: Patch multiple nodes simultaneously
o This method applies compute node patches/updates across multiple nodes in parallel. Using ExaPatch, you can patch a subset of nodes at a time or patch all the nodes in the Exalogic rack.
•It is recommended that you patch one compute node, verify that everything works as expected, and then attempt to patch multiple compute nodes in parallel.

-----Prerequisites

Before upgrading:
•Ensure that the compute node base image is at v2.0.6.1.0.
•There is no need to back up the compute nodes. If a compute node needs to be restored, install the base image and use the Exalogic Configuration Utility (ECU) to configure it.
•Ensure that at least 80 MB of free space exists in the root (/) partition (see the commands below). You can free up disk space by running yum clean all or by deleting files that are no longer needed in the /tmp directory. Do not delete files in the directories /var/log/xen, /var/tmp/exalogic, or /var/tmp/ebi_conf.pre20611.bak. In the /var/log directory, do not delete ExaPatch log files or the file called ebi_20611.log.
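A quick way to check the free space in the root partition and clean the yum cache before patching (df and yum are standard on the compute node base image; the 80 MB figure comes from the prerequisite above):

[root@compute-node1]# df -h /
[root@compute-node1]# yum clean all
[root@compute-node1]# df -h /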

------Setup Patch on all Nodes

Transfer psuSetup.sh from the downloaded location to this machine:
[root@compute-node2]# scp root@compute-node1:~/psuSetup.sh .

Run psuSetup.sh, specifying the IPoIB address of the active ZFS head, with the --mountonly option:
[root@compute-node2]# ./psuSetup.sh 1.1.555.120 --mountonly
First, upgrade one of the compute nodes.

----Verify upgrade.
This process takes less than 5 minutes. The main updates to the compute node are the Ksplice and version updates. The upgrade does not cause a reboot of the compute node being upgraded. You may connect to the serial console through ILOM and monitor the upgrade process.
After all the compute nodes, other than the one on which ExaPatch is running, have been upgraded, log in to the second compute node (cn02) and run ExaPatch to upgrade the first (cn01) compute node.
Verify the new compute node base image version in the ExaPatch output.

----Patch the Exalogic Control vServer templates:
[root@compute-node1]# /exalogic-lctools/bin/exapatch -a patch ectemplates
Patching each vServer template can take at least 25 minutes. The total duration for this step is at least 75 minutes. During the patching process, the progress can be tracked by using xm console to log in to the console of the vServer being patched. If you are tracking the progress, after each vServer reboot you should run xm console again to reconnect to the vServer console.
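For example, on the compute node hosting the vServer being patched, you can list the running domains and attach to its console (a sketch; the domain name is a placeholder taken from the xm list output, and Ctrl+] detaches from the console without stopping the vServer):

[root@compute-node1]# xm list
[root@compute-node1]# xm console vServer_Domain_Name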

The code below shows how to filter data from a source table using a lookup table, fetching the filter value dynamically.

This code checks whether the old (before-image) value of ORG_ID for SOURCE_TABLE exists in LOOKUP_PARAMETER_TAB.
To filter using the lookup table, I used FILTER together with a SQLEXEC that fetches the lookup value. If the value matches, a second SQLEXEC executes a stored procedure, getting the dynamic parameters with the @GETVAL function and passing them to the procedure.
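The original parameter file is not shown here, but a minimal sketch of the pattern described above could look like the following. All object, procedure, and parameter names (SOURCE_TABLE, LOOKUP_PARAMETER_TAB, FILTER_PROC, P_ORG_ID, P_CNT) are placeholders, not the actual ones, and the @BEFORE usage assumes before images are captured (for example with GETUPDATEBEFORES in the extract).

-- Sketch only: names below are placeholders.
-- 1) SQLEXEC "lookup" runs before the FILTER (BEFOREFILTER) and counts the rows
--    in LOOKUP_PARAMETER_TAB matching the before-image of ORG_ID.
-- 2) FILTER passes the record only when that count is greater than zero.
-- 3) A second SQLEXEC then calls a stored procedure, passing values fetched
--    dynamically with @GETVAL.
MAP scott.source_table, TARGET scott.source_table,
SQLEXEC (ID lookup,
  QUERY "SELECT COUNT(*) cnt FROM lookup_parameter_tab WHERE org_id = :p_org_id",
  PARAMS (p_org_id = @BEFORE(org_id)), BEFOREFILTER),
FILTER (@GETVAL(lookup.cnt) > 0),
SQLEXEC (SPNAME filter_proc, ID run_proc,
  PARAMS (p_org_id = org_id, p_cnt = @GETVAL(lookup.cnt)));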