Comparing the sizes, they were different: 200 MB on the standby versus 600 MB on the primary.
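The comparison itself is easy to pull from the v$ views on each database (v$log for the online redo logs, v$standby_log for the standby redo logs):

select 'ONLINE' log_type, thread#, group#, bytes/1024/1024 mb from v$log
union all
select 'STANDBY', thread#, group#, bytes/1024/1024 from v$standby_log
order by 1, 3;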

Time to recreate the standby redo logs with the correct size to match the primary. I pull the standby redo log scripts dynamically on the primary to drop and then recreate the log files. Again, John Hallas does a really good job providing this information.
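In case it helps, here's a sketch of the kind of generated DDL I mean; the 600M size is from my environment, so adjust to yours, and generate both scripts before running the drops:

select 'alter database drop standby logfile group '||group#||';' from v$standby_log;

select 'alter database add standby logfile thread '||thread#||' group '||group#||' size 600M;' from v$standby_log;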


I recently encountered this error after a rollout in production. Several hundred procedures were invalid on the logical standby. I was unable to recompile them and received an ORA-16224 Database Guard is enabled error.
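The check for the guard setting is a one-liner on the standby:

select guard_status from v$database;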

I checked the guard_status in v$database. Usually this error is thrown when it is set to ALL; in this case, however, it was set to STANDBY. I was interested in disabling the guard, which can be done with:

alter database guard none;  -- other options are STANDBY (what it was already set to) and ALL (which would not have helped)

However, this can also be set at the session level:

alter session disable guard;

That turns it off for my session only. I whipped up a quick anonymous block to recompile all of the invalid procedures.

declare
  sSQL varchar2(4000);
begin
  for i in (select object_name
              from dba_objects
             where owner = 'MY_SCHEMA'
               and status = 'INVALID'
               and object_type = 'PROCEDURE')
  loop
    sSQL := 'alter procedure my_schema.' || i.object_name || ' compile';
    execute immediate sSQL;
  end loop;
end;
/

Once that executed successfully, I simply turned the guard back on in my session:

alter session enable guard;

I had the end user attempt to run the report again. Success.
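A quick sanity check that nothing was left invalid (same schema filter as the anonymous block):

select count(*)
  from dba_objects
 where owner = 'MY_SCHEMA'
   and status = 'INVALID';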


It seems today is the day for logical standby issues. Two tables hit a conflict we have suffered previously, and for which we applied a patch. It's interesting that we started receiving it again; however, I'm going to skip to how I resolved the immediate issue by re-syncing the logical standby objects and allowing SQL Apply to continue. I'll need to open an SR with MOS for the other issue.
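For reference, the failed transactions and their error text show up in DBA_LOGSTDBY_EVENTS on the standby, which is usually my first stop:

select event_time, status, event
  from dba_logstdby_events
 order by event_time desc;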

I started by trying DBMS_LOGSTDBY.INSTANTIATE_TABLE, which is nice and convenient since it uses a database link, so there are no files to move around.
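A sketch of the call; the schema, table, and database link names are placeholders for your environment, and SQL Apply has to be stopped while the table is re-instantiated:

alter database stop logical standby apply;

exec dbms_logstdby.instantiate_table(schema_name => 'MY_SCHEMA', table_name => 'MY_TABLE', dblink => 'PRIMARY_LINK');

alter database start logical standby apply immediate;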


Recently I've been playing Tetris with my databases: duplicating them and reconciling differences caused by a change management system with significant gaps. During one of these fun moments I ended up with a database using an SPFILE, but the SPFILE didn't actually exist. Simple fix:

create pfile from memory;

All was well, and it saved me from having to re-create the pfile from the alert.log. The SPFILE can be created this way as well, provided you are currently running on a pfile:

create spfile from memory;
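Both variants also accept an explicit target path if you want the file somewhere other than the default under $ORACLE_HOME/dbs; the path and SID here are just examples:

create pfile='/tmp/initMYDB.ora' from memory;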

Every minute I can save with little tidbits such as the above is precious.


I recently ran into several issues while installing 11.2.0.2 Grid Infrastructure (GI). Most of the issues stemmed from how the admins had designed the system, which forced several installs and uninstalls. On the final install everything appeared to be in order until I tried to start ASMCA to add diskgroups. I received an error message saying an earlier version of ASM was running and that, in order to upgrade, I needed to start the earlier version's ASMCA. I only ever had one version installed, so this was a spurious error.

After reviewing the system I decided to relink the GI binaries:

As ROOT:

cd $GID_HOME/crs/install

perl rootcrs.pl -unlock    # stops the clusterware and sets the permissions for root
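The rest of the relink sequence on 11.2 goes roughly like this; run the actual relink as the GI owner with ORACLE_HOME pointed at the GI home, then re-lock as root:

# as the GI owner, with ORACLE_HOME set to the GI home:
$ORACLE_HOME/bin/relink all

# then as root again, to restore the permissions and restart the stack:
cd $GID_HOME/crs/install
perl rootcrs.pl -patch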


My phone started going crazy with alerts from EM12c; apparently all of my agents had lost communication with the OMS. I immediately started checking all of the logs on the OMS. Not finding anything of interest, I decided to try an upload from one of the hosts:
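The manual upload (and a status check) from the agent side looks like this; AGENT_HOME here stands for wherever the 12c agent is installed:

$AGENT_HOME/bin/emctl status agent
$AGENT_HOME/bin/emctl upload agent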


It never fails: every time I have a quick request that I should be able to hammer out in a few minutes, something hampers it. Today is just one of those days. I needed to perform an expdp/impdp of a small schema between development and testing, and I received an error:

ORA-31634 Unable To Construct Unique Job Name By Default

I generally default the job name on Data Pump exports, especially if they are one-offs. Today, however, that just didn't cooperate. Apparently we have experienced some failures with other exports, and their master tables still exist. There's a limit of 99 default job names. An easy check:

select owner_name,state,job_name from dba_datapump_jobs;

If the number of records returned equals 99, then simply drop the orphaned master tables: drop table SYS_EXPORT_SCHEMA_xx; replacing xx with the number shown in the job name from the output above.
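To generate the cleanup statements rather than typing them, something like this works; just be sure you only drop master tables for jobs that are not actually running:

select 'drop table '||owner_name||'.'||job_name||';'
  from dba_datapump_jobs
 where state = 'NOT RUNNING';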