Pythian - Data Experts Blog » Rene Antunez
http://www.pythian.com/blog

EM12c : Login to GUI with the correct password causes authentication failure
Thu, 21 May 2015
http://www.pythian.com/blog/em12c-login-to-gui-with-the-correct-password-causes-authentication-failure/

So the other day I was trying to log in to my EM12c R4 environment with the SSA_ADMINISTRATOR user, and I got the error: “Authentication failed. If problem persists, contact your system administrator.”

I was quite sure that the password I had was correct, so I tried with the SYSMAN user and got the same error. I still wanted to verify that I had the correct password, so I used the SYSMAN user to log in to the repository database. That was successful, so I knew something was wrong elsewhere.

SQL> connect sysman/
Enter password:
Connected.

So I went to <gc_inst>/em/EMGC_OMS1/sysman/log/emoms.log and saw the following error:

2015-05-18 21:22:06,103 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR audit.AuditManager auditLog.368 - Could not Log audit data, Error:java.sql.SQLException: ORA-14400: inserted partition key does not map to any partition
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 492
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 406
ORA-06512: at line 1

This led me to believe that JOB_QUEUE_PROCESSES was set to 0, but that wasn’t the case, since it was set to 50. That value is still too low for an EM12c repository, though, so I bumped it up to 1000 and reran the EM12c repository DBMS Scheduler jobs as per the documentation in MOS note 1498456.1:
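
The MOS note has the exact job-resubmission steps; a minimal sketch of the parameter change, plus a standard sanity check of the SYSMAN scheduler jobs (my own addition, not the note’s verbatim steps):

SQL> ALTER SYSTEM SET job_queue_processes = 1000 SCOPE=BOTH;
SQL> SELECT job_name, state, last_start_date FROM dba_scheduler_jobs WHERE owner = 'SYSMAN';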

Even though JOB_QUEUE_PROCESSES was not set to 0, it was still the cause of the failure: the value was too low, so the repository’s partition-maintenance job apparently never ran, new audit records had no partition to land in (the ORA-14400 above), and every GUI login failed when its audit record couldn’t be written. So be careful when setting this parameter, and be sure to follow the latest installation guidelines.

Quick Tip : Oracle User Ulimit Doesn’t Reflect Value on /etc/security/limits.conf
Mon, 04 May 2015
http://www.pythian.com/blog/quick-tip-oracle-user-ulimit-doesnt-reflect-value-on-etcsecuritylimits-conf/

So the other day I was trying to do a fresh installation of a new Oracle EM12cR4 in a local VM, and as I was doing it with DB 12c, I decided to use the Oracle preinstall RPM to ease my installation of the OMS repository database. I was also doing both the repository and the EM12c OMS install in the same VM, which is important to know.

[root@em12cr4 ~]# yum install oracle-rdbms-server-12cR1-preinstall -y

I was able to install the DB without any issues, but when I was trying to install EM12cR4, a warning popped up in the prerequisite checks:

WARNING: Limit of open file descriptors is found to be 1024.

For proper functioning of OMS, please set “ulimit -n” to be at least 4096.

And indeed, when I checked the soft limit for open file descriptors for the oracle user, it was set to 1024:

oracle@em12cr4.localdomain [emrep] ulimit -n
1024

If you have been working with Oracle DBs for a while, you know that this is checked and modified in /etc/security/limits.conf, but to my surprise the limit had already been set correctly there for the oracle user, to at least 4096:
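
Illustrative /etc/security/limits.conf entries (the hard-limit value is an assumption):

oracle   soft   nofile   4096
oracle   hard   nofile   65536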

My next train of thought was to verify the oracle user’s bash profile settings, since ulimits set there can override limits.conf, but again to my surprise there was nothing in there, and that is where I was perplexed.

So what I did next was open a root terminal and trace the login of the oracle user:

[root@em12cr4 ~]# strace -o loglimit su - oracle

In another terminal I then verified what the login process was reading regarding the user limits, and this is where I hit the jackpot. I could see that it was reading pam_limits.so and /etc/security/limits.conf as it should, but it was also reading another configuration file called oracle-rdbms-server-12cR1-preinstall.conf (does this look familiar to you? :) ), and RLIMIT_NOFILE was being set to 1024:
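
A hedged sketch of pulling the relevant lines out of the trace file (the exact lines and limit values will differ per system):

[root@em12cr4 ~]# grep -E 'limits|setrlimit' loglimit
open("/etc/security/limits.conf", O_RDONLY) = 3
open("/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf", O_RDONLY) = 3
setrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=4096}) = 0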

So I went ahead and checked the file /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf, and evidently that is where the limit was being set to 1024, so the only thing I did was change the value there to 4096:
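
After the edit, the relevant lines (values illustrative of the preinstall RPM’s defaults) read:

# /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf
oracle   soft   nofile   4096    # was 1024
oracle   hard   nofile   65536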

Once I made that change and logged out and back in, I saw the values I had originally set in /etc/security/limits.conf, and I was able to proceed with the installation of EM12cR4:

oracle@em12cr4.localdomain [emrep] ulimit -n
4096

Conclusion

So when you install the oracle-rdbms-server-12cR1-preinstall RPM, be aware that when you change user limits, there may be another configuration file under /etc/security/limits.d/ setting values other than the ones you set in /etc/security/limits.conf.

Cassandra 101 : Understanding What Cassandra Is
Mon, 16 Mar 2015
http://www.pythian.com/blog/cassandra-101-understanding-what-cassandra-is/

As some of you may know, in my current role at Pythian I am tackling open source databases (OSDB), and Cassandra is on my radar. One of the things I have been trying to do is learn what Cassandra is, so in this post I’m going to share a bit of what I have been able to learn.

According to the whitepaper “Solving Big Data Challenges for Enterprise Application Performance Management”, Cassandra is a “distributed key value store developed at Facebook. It was designed to handle very large amounts of data spread out across many commodity servers while providing a highly available service without single point of failure allowing replication even across multiple data centers as well as for choosing between synchronous or asynchronous replication for each update.”

Cassandra, in layman’s terms, is a NoSQL database developed in Java. One of Cassandra’s many benefits is that it’s an open source DB with deep developer support. It is also a fully distributed DB, meaning that there is no master node (unlike Oracle or MySQL), which allows the database to have no single point of failure. It also touts being linearly scalable: if you have 2 nodes and a throughput of 100,000 transactions per second, adding 2 more nodes would give you 200,000 transactions per second, and so forth.

Cassandra is based on 2 core technologies, Google’s BigTable and Amazon’s Dynamo, and was originally developed at Facebook to power its Inbox Search feature. It was released as an open source project on Google Code, was then incubated at Apache, and is nowadays considered a Top-Level Project. Currently there are 2 versions of Cassandra:

Since Cassandra is a distributed system, it is subject to the CAP theorem (awesomely explained here), which states that in a distributed system you can only have two of the following three guarantees across a write/read pair:

Consistency .- A read is guaranteed to return the most recent write for a given client.

Availability .- A non-failing node will return a reasonable response within a reasonable amount of time (no error or timeout).

Partition Tolerance .- The system will continue to function when network partitions occur.

Cassandra is also a BASE (Basically Available, Soft state, Eventually consistent) system, not an ACID (Atomicity, Consistency, Isolation, Durability) system, meaning that it is optimistic and accepts that database consistency will be in a state of flux, unlike ACID, which is pessimistic and forces consistency at the end of every transaction.

Cassandra stores data according to the column family data model where:

Keyspace is the container for your application data, similar to a schema in a relational database. Keyspaces are used to group column families together; typically, a cluster has one keyspace per application. The keyspace also defines the replication strategy, and data objects belong to a single keyspace.

Column Family is a set of one, two, or more individual rows with a similar structure.

Row is a collection of sorted columns; it is the smallest unit that stores related data in Cassandra, and any component of a row can store data or metadata.

Row Key uniquely identifies a row in a column family.

Column key uniquely identifies a column value in a row.

Column value stores one value or a collection of values.
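
To make those terms concrete, here is a minimal CQL sketch (my own illustration, not from the original post): the keyspace carries the replication strategy, the table is the column family, and its primary key is the row key.

CREATE KEYSPACE app_data
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE app_data.users (    -- a column family
  user_id text PRIMARY KEY,      -- the row key
  name    text,                  -- column key -> column value
  email   text
);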

We also need to understand the basic architecture of Cassandra, which has the following key structures:

Node is one Cassandra instance and is the basic infrastructure component in Cassandra. Cassandra assigns data to nodes in the cluster; each node is assigned a part of the database based on the row key. A node usually corresponds to a host, but not necessarily, especially in dev or test environments.

Rack is a logical set of nodes.

Data Center is a logical set of racks; a data center can be a physical or a virtual data center. Replication is set per data center.

Cluster contains one or more data centers and is the full set of nodes which map to a single complete token ring.

Conclusion

Hopefully this helps you understand the basic Cassandra concepts. In the next post, I will go over the architecture concepts: what a seed node is, the purpose of the snitch and topologies, the coordinator node, replication factors, etc.

MySQL: Troubleshooting an Instance for Beginners
Wed, 22 Oct 2014
http://www.pythian.com/blog/mysql-troubleshooting-an-instance-for-beginners/

So as you may know, my new position involves the MySQL world, so I’m on the task of picking up the language and whereabouts of this DBMS. My teammate Alkin Tezuysal (@ask_dba on Twitter) has a very cool break-and-fix lab which you should check out if you are going to Percona Live London 2014; he will be running this lab, so be sure not to miss out.

The first thing I tried was to bring up the service, but to my surprise, the mysql user didn’t exist. So the first thing I did was create the user.
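
A hedged sketch of that step (the names follow the usual MySQL conventions; paths vary by distribution):

[root@ip-10-10-10-1 ~]# groupadd mysql
[root@ip-10-10-10-1 ~]# useradd -r -g mysql -s /bin/false mysql
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql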

Now that the user exists, I try to bring it up, and we are back at square one, as a configuration variable in the .cnf file is incorrect. But there is another problem: there is more than one .cnf file.
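
When several option files are in play, it helps to know which ones mysqld reads and in what order; mysqld itself will tell you (the file list below is illustrative):

[root@ip-10-10-10-1 ~]# mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf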

Another try, but again the same result — and even worse this time, as there was no output at all. After digging around, I found that the place to look is /var/log/mysqld.log, and the problem was that some library files belonged to the root user instead of the mysql user.

So I think, yay, I’m set and it will come up! I give it one more shot and, you guessed it, same result and a different error :( This time around the problem seemed to be that the memory assigned was incorrect and we didn’t have enough on the machine, so we changed it.
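
The usual suspect here is a buffer pool sized for a bigger machine; an illustrative my.cnf fix (the values are hypothetical):

[mysqld]
# innodb_buffer_pool_size = 16G   <-- more memory than the VM has
innodb_buffer_pool_size = 512M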

Now, I wasn’t even expecting the service to come up, but to my surprise it came up!

[root@ip-10-10-10-1 ~]# service mysqld start
Starting mysqld: [ OK ]

So now what I wanted to do was connect and start working, but again, there was another error! I saw that it was related to the socket file mysql.sock, so I changed it to the correct value in our .cnf file.
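
Client and server must agree on the socket path; an illustrative .cnf snippet (the path shown is the common RHEL default, an assumption here):

[mysqld]
socket = /var/lib/mysql/mysql.sock

[client]
socket = /var/lib/mysql/mysql.sock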

As you can see, there are different ways to troubleshoot the startup of a MySQL instance. I hope this helps you on your journey when you are starting out with this DBMS, and if you know of another way, let me know in the comments section below.

Please note that this blog post was originally published on my personal blog.

RMAN 12c : Say goodbye to your backup when dropping your PDB
Fri, 14 Feb 2014
http://www.pythian.com/blog/rman-12c-say-goodbye-to-your-backup-when-dropping-your-pdb/

Update: 09/March/2015 - I wrote a second part and followed up on this on my personal blog: RMAN 12cR1 : Say goodbye to your backup when dropping your PDB – Part II.

I was working on my presentations for IOUG Collaborate and came upon this strange behaviour in RMAN 12c (12.1.0.1.0) which, to me, shouldn’t happen. It seems that DROP PLUGGABLE DATABASE is the equivalent of DROP DATABASE INCLUDING BACKUPS: if you need to restore your PDB later on, its backups will no longer be registered. So just be careful when dropping them.

Here we go. I took a backup of my CDB and all of its PDBs, and kept an eye on the tag 20140212T191237 (I removed a couple of lines for better reading):
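
A hedged sketch of the commands involved (the PDB name is hypothetical):

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP OF PLUGGABLE DATABASE pdb1 SUMMARY;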

As you can see, I was able to restore and recover my PDB without a problem. But what happens if I drop my PDB and later decide that it is needed after all? When I tried to go back to my backup, it was no longer there, and RMAN no longer reported the backup tag 20140212T191237:
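
A hedged reconstruction of that sequence, again with a hypothetical PDB name:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SQL> DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;

RMAN> LIST BACKUP TAG 'TAG20140212T191237';
specification does not match any backup in the repository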

As you can see, that backup is no longer registered. I still don’t know if this is normal behaviour for PDB backups or a bug, but for now just be careful when dropping a PDB: your backup will not be reliable. Scary stuff, isn’t it?

How to Apply a Standby-First PSU Patch in a 2 node RAC environment
Tue, 24 Dec 2013
http://www.pythian.com/blog/how-to-apply-a-standby-first-psu-patch-in-a-2-node-rac-environment/

Since version 11.2.0.1, Oracle provides a way to apply certain patches to our standby environment first, without compromising the primary database. This lets the patch burn in on the standby for as long as you deem appropriate; you then apply the patch binaries to the primary DB ORACLE_HOME, and the changes cascade to our standby DB.

The first thing you have to make sure of before applying a patch like this is that it is certified as “Standby-First” in the MOS patch note.

For this, I have the patch binaries in /u01/app/oracle/patches/11.2.0.3/PSUOct2013, and the databases I will patch are the primary DB TEST on nodes oracleenespanol1/oracleenespanol2 and the standby TESTSTBY on nodes oracleenespanol3 and oracleenespanol4. The PSU patch I will apply is 17272731, which is 11.2.0.3.8.

Let’s start this long post in the standby environment, so please be patient. The first thing I recommend is that you take a backup of your binaries so that, should anything go wrong, we can return to them without much trouble. This is run as root. Similarly, I recommend that before you start, you have the necessary backups of your database.
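
A hedged sketch of that binary backup (the home paths are assumptions; adjust them to your layout):

[root@oracleenespanol3 ~]# tar -czf /u01/backup/grid_home_pre_psu.tar.gz -C /u01/app/11.2.0 grid
[root@oracleenespanol3 ~]# tar -czf /u01/backup/db_home_pre_psu.tar.gz -C /u01/app/oracle/product/11.2.0 dbhome_1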

To continue, you must create an OCM response file as the oracle user. This does not mean we are going to install OCM, but the file is required for a PSU. And remember that the directory holding our patch binaries is shared, so we only have to do this once.
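
A sketch of creating it with the emocmrsp utility that ships with OPatch (the output location is an assumption):

[oracle@oracleenespanol3 ~]$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp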

I like to note which of my services are up or down in my RAC environment before making any changes, so that I can crosscheck when I finish. I use a script called crs_status.sh to verify this, and I run it as the grid user; all you have to change is the value of CRS_HOME. Hopefully it will serve you too.
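
A minimal sketch of what such a script might look like:

#!/bin/bash
# crs_status.sh - list all CRS resources with their target and current states.
CRS_HOME=/u01/app/11.2.0/grid    # adjust to your Grid Infrastructure home
$CRS_HOME/bin/crsctl status resource -t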

The only thing left is to use the crs_status.sh script again to check which of my RAC services are up/down. For this type of patch, you have to know that there is no extra step on the standby, as catbundle.sql is applied on the primary DB, not the standby. As you can see, it is a long process, and we have only just finished the standby environment; we will now proceed to the primary. This is where you wait as long as you deem necessary, to see whether there are any errors that could impact your primary database when you apply the patch there or that would make you roll back the patch.

Now we will move to the primary environment.

For the primary, I’m not going to repeat the steps already shown for the standby environment (oracleenespanol3/oracleenespanol4) that you will also have to perform in the primary environment (oracleenespanol1/oracleenespanol2), but here is a summary of what has to be done on the primary nodes:

Take a backup of your GI_HOME/ORACLE_HOME and inventory

Verify you have a valid backup of your Primary DB

Run opatch prereq CheckConflictAgainstOHWithDetail against the binaries in your primary environment, as we did in the standby environment (see the sketch after this list).

Make sure that you have your ocm.rsp file created

Verify which services are up/down in your RAC environment
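
A hedged sketch of that conflict pre-check, using the patch location from earlier in the post:

[oracle@oracleenespanol1 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17272731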

Once you’ve done the steps above, this is where the process of applying the patch changes.
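
A sketch of applying the PSU to the database home with opatch auto, run as root (the -oh flag and paths are assumptions; the exact invocation depends on whether you are patching Grid Infrastructure as well):

[root@oracleenespanol1 ~]# /u01/app/11.2.0/grid/OPatch/opatch auto /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17272731 -oh /u01/app/oracle/product/11.2.0/dbhome_1 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp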

We verify again, as we did in the standby environment, with opatch lsinventory that patch 11.2.0.3.8 is properly applied to our binaries, and with the crs_status.sh script we crosscheck which services are up/down in our primary environment.

What we have to do next is run catbundle.sql as the oracle user from $ORACLE_HOME/rdbms/admin in our primary database; this has to run on one node only, not both.
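
A sketch of that step (the psu apply arguments are the standard ones for an 11.2 PSU):

SQL> @$ORACLE_HOME/rdbms/admin/catbundle.sql psu apply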

At this point, we’re done with the primary database and the primary nodes. On the oracleenespanol3/oracleenespanol4 servers (standby), let’s put TESTSTBY back in recovery mode. In my case I’m using Active Data Guard; just be careful with that, because it is an extra licensable option for your standby DB.
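
A hedged sketch (with Active Data Guard the standby stays open read only while redo is applied):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;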

And now there is nothing more to do but verify that all is well. Check that the last archived redo log, which in my case was sequence 82197, is applied, and look at the registry history of your standby database: you’ll see version 11.2.0.3.8.
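
Hedged verification queries for both checks, run on the standby:

SQL> SELECT MAX(sequence#) FROM v$archived_log WHERE applied = 'YES';
SQL> SELECT action_time, version, comments FROM dba_registry_history ORDER BY action_time;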

This is very useful for maintaining high availability in your environment while allowing you to verify that the patch you apply does not harm your primary database. Let me know if you have had a use case for this type of patching and whether you think it is useful to you.

How to Fix a Target with a Pending Status in OEM 12cR2
Thu, 29 Aug 2013
http://www.pythian.com/blog/how-to-fix-a-target-with-a-pending-status-in-oem-12cr2/

The other day we had a situation with an OEM 12cR2. We performed maintenance in a 2-node cluster and both instances came up fine, but one of the instances in OEM’s Cluster Database target was showing a pending status. It goes without saying that before the maintenance, both showed as up.

So the first thing we did was to bounce the agent and clear its state.
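
A sketch of that bounce, run as the agent owner:

$ emctl stop agent
$ emctl clearstate agent
$ emctl start agent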

But with our luck, that neither helped nor changed anything. The next thing we did was check whether the agent was communicating with the OMS, testing from the server where the agent resided to see if it was responding.
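
Typical checks from the agent host (a sketch; your output will vary):

$ emctl status agent
$ emctl upload agent
$ emctl pingOMS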

We concluded that they were communicating, so that was another theory to discard… Then we tried something that used to work in 11g: move the upload and state directories aside, then bounce the agent and clear its state.

OEM 12c has a great feature that allows you to resync the agent via OEM. Here are the steps:

Go to Setup –> Manage Cloud Control –> Agents;

Click on the testdrv01 agent;

On the drop-down menu from Agent, choose Resynchronization;

Be sure to select “Unblock agent on successful completion of agent resynchronization”.

Once you do that, you will see an output like below:

resyncState: IN executeCommand
resyncState: validated parameters
Starting resync RESYNC_20130827141300 for agent testhost:9999
Getting list of all the Targets to remove from the the agent - testhost:9999
Removing 27 targets from the agent
Removing list of plugins from the agent
Getting list of all the Targets from the repository for the agent - testhost:9999
Pushing list of plugins to the agent
Promoting list of targets to the agent
Re-deploying Metric Extensions to the targets
Saving target collections to the agent
Cleaning state on the agent
size of repos blackout list is 0
Retrieving java callbacks from em_gcha_callbacks table
Pdp settings syncronized successfully
resyncState: resync of agent succeeded!

And voilà!!! Once we did this, we could see the CLUSTER_TEST target in our agent.

I hope this can help you out when you face something like this! I can guarantee that the next time I’m faced with a similar situation, the first thing I will do is grab a target list from the agent.
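
That target list can be pulled straight from the agent (a sketch):

$ emctl config agent listtargets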

My First Five Minutes with Oracle Database 12c
Wed, 03 Jul 2013
http://www.pythian.com/blog/my-first-five-minutes-with-oracle-database-12c/

As many of you know, Oracle Database 12cR1 was released last week, and it did something that I haven’t seen before from an Oracle database release: it caused a blogging storm. According to Steve Karam, a.k.a. the Oracle Alchemist, in less than a week we have had 91 articles from 47 authors regarding this release. For me, an active member of the community, this is one of the coolest things, because you could feel that day how the gates had been lifted. Everyone was trying to give you their take on it and transmit their knowledge to you.

Here are my first impressions on my first 5 minutes of handling Oracle 12c:

You need to get familiar with the words container, PDB (Pluggable Database) and CDB (Container Database). This is how Oracle is tackling the multi-tenant environment, so here are the first things I saw in my first 5 minutes with 12c.

Get used to using con_name and con_id to identify which container you are working in, and get to know what each con_id value means.
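
A quick sketch of checking where you are (standard 12c commands; the con_id meanings in the comment are from the Oracle documentation):

SQL> SHOW CON_NAME
SQL> SHOW CON_ID
SQL> SELECT name, con_id FROM v$containers;
-- con_id 0 = the CDB as a whole, 1 = CDB$ROOT, 2 = PDB$SEED, 3 and up = user PDBs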

When you create a CDB, you get a seed (PDB$SEED) for your future PDBs, so creating a new pluggable database is as easy as pie. Just be sure to have either OMF or PDB_FILE_NAME_CONVERT defined.
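
A minimal sketch, assuming OMF (db_create_file_dest) is set and using hypothetical names:

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb_admin IDENTIFIED BY welcome1;
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;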

These are just a couple of things you need to get used to when managing an Oracle 12c database, so start reading the official documentation to learn how to manage your 12c database. And don’t forget to read and create your own blog entries about your experiences with this new release.

Creating a Physical Standby with RMAN Active Duplicate in 11.2.0.3
Tue, 28 May 2013
http://www.pythian.com/blog/creating-a-physical-standby/

Other DBAs have written about this topic, but I wanted it to be available on Pythian’s blog. When I searched for how this was done, other sites were either not very clear on the steps, assumed that you already knew what you were doing, or went through the steps too quickly.

If this is your first time building a standby, there is some terminology you need to know before going into any of the steps of creating your physical standby. It will help you better understand your Data Guard environment and what is being done, instead of simply copying a number of steps. These are just the definitions from Oracle’s documentation, but they will help you avoid an arduous search.

LOG_ARCHIVE_DEST_n .- Controls different aspects of how redo transport services transfer redo data from the primary database to a standby destination. This parameter has several attributes that are needed to set up your Data Guard environment; I will only mention the critical ones:

ASYNC .- This is the default; the redo data generated by a transaction need not have been received at a destination that has this attribute before that transaction can commit.

or

SYNC .- The redo data generated by a transaction must have been received by every enabled destination that has this attribute before that transaction can commit.

AFFIRM and NOAFFIRM .- Control whether a redo transport destination acknowledges received redo data before or after writing it to the standby redo log. The default is NOAFFIRM.

DB_UNIQUE_NAME .- Specifies a unique name for the database at this destination. You must specify a name; there is no default value.

VALID_FOR .- Identifies when redo transport services can transmit redo data to destinations, based on the following factors:

redo_log_type .- whether online redo log files, standby redo log files, or both are currently being archived on the database at this destination

database_role .- whether the database is currently running in the primary or the standby role

FAL_SERVER .- Specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name.

FAL_CLIENT .- Specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER initialization parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database).

LOG_ARCHIVE_CONFIG .- Enables or disables the sending of redo logs to remote destinations and the receipt of remote redo logs. This parameter has several attributes; the most important for this exercise is below:

DG_CONFIG .- Specifies a list of up to 30 unique database names (defined with the DB_UNIQUE_NAME initialization parameter) for all of the databases in the Data Guard configuration.

Now that we have the definitions out of the way (you can find them in the Oracle 11.2 documentation), we will continue with the setup of our physical standby.

For this exercise, I have the following:

Primary: testgg1  Server: dlabvm13

Standby: testgg2  Server: dlabvm14

The first thing we need to do is find where the redo logs and datafiles reside on the primary and where they will reside on the standby, so that you can set the LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT parameters properly. Make sure that these directories have the necessary space to hold the primary database. If you don’t have this space, do not continue.
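
A quick sketch of finding the current locations on the primary:

SQL> SELECT name FROM v$datafile;
SQL> SELECT member FROM v$logfile;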

Next, ensure that you are in archivelog mode and that force logging is enabled on your primary.
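
A sketch of checking, and if needed enabling, both on the primary:

SQL> SELECT log_mode, force_logging FROM v$database;
SQL> ALTER DATABASE FORCE LOGGING;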

Now that we are running in archivelog mode and force logging is set on the primary, make sure that the listener/TNS entries are set correctly and that you can tnsping both databases from the primary and the standby.

Then, create and replicate the password file from the primary’s $ORACLE_HOME/dbs and rename it for the standby database. The password file name must match the ORACLE_SID used at the standby site, not the DB_NAME.

One of the coolest things about this method is that almost all of the work will be done on the primary database server. The only things you have to do on the standby server are create the locations for the diagnostic files, redo logs, datafiles, and control files; verify the connectivity between the primary and the standby; and start the standby instance, which is our next step.

The next step is to set the ORACLE_SID, ORACLE_HOME, and ORACLE_BASE for the standby instance and start it with minimal options:
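
A hedged sketch of the sequence (the pfile path and connection aliases are hypothetical), with the duplicate itself run from the primary server:

[oracle@dlabvm14 ~]$ export ORACLE_SID=testgg2
[oracle@dlabvm14 ~]$ sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/tmp/inittestgg2.ora'

[oracle@dlabvm13 ~]$ rman target sys@testgg1 auxiliary sys@testgg2
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
2> SPFILE SET db_unique_name='testgg2'
3> NOFILENAMECHECK;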

As you can see, that was as easy as pie. Now we can just start the recovery process in the standby database. In this case I used Active Data Guard so that I could show you that it is actually working, but be aware that this is a licensable option.
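
A hedged sketch of starting recovery and confirming redo is being applied:

SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
SQL> SELECT process, status, sequence# FROM v$managed_standby;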

I hope this little guide helps you out when you are building your physical standby with an active duplicate. As always, test anything I have said or mentioned here before trying it in a production environment.

The Job Growth of a DBA or How I Learned to Stop Worrying and Love the Bomb
Mon, 27 May 2013
http://www.pythian.com/blog/the-job-growth-of-a-dba-or-how-i-learned-to-stop-worrying-and-love-the-bomb/

A couple of years ago, I wrote in my personal blog (in Spanish) that being a DBA (Database Administrator) was the 7th best job opportunity in America, with an expected growth rate of 20% over the next 10 years (2010), according to CNNMoney. The other day, I ran into the same report from 2012, where it says that being a DBA is the 5th best job in America, with an expected growth of 30.6% over the next 10 years.

For me, that is awesome news, since it means being a DBA is expected to remain a great career choice for the next 10 years. The industry is not shrinking! On the contrary, it is booming.

Why is it growing?

We as a society have passed through several eras (agricultural, industrial, etc.); I won’t go into detail on those. What is important to know is that we have now fully entered the era of knowledge. If you don’t believe me, I can point you to several articles that explain why this is true. It is important not only to have the information, but also to know what can be done with it and how fast it can be applied.

DBAs are the ones responsible for that data and must ensure its availability and easy access in this Era of Knowledge. Today, ideas and information are a main source of economic growth, often more important than land or other tangible resources. The people responsible for this will have a bigger role in this era than we could have ever dreamed of even just a couple of years ago.

I reference Kubrick in this post because DBAs have a large role in this era, and we need to get on that bomb like Major T. J. “King” Kong did: prepare ourselves in the best way we can (FIT-ACER, an MBA, certifications, communities, the ACE program, etc.), enjoy the explosion that has already started, and get a kick out of the ride. I expect that by 2015 or 2016, being a DBA will be one of the most sought-after jobs in the working field, and that the competition will be harder.