Friday, October 23, 2009

Usage of ocrconfig to administer Oracle Cluster Registry:
Prerequisites: Requires administrative privileges on UNIX-based systems, or a user with Administrator privileges on Windows-based systems.
Usage: ocrconfig -option
Options:
-backuploc: To change an OCR backup file location.
-downgrade: To downgrade OCR to an earlier version.
-export: To export the contents of an OCR to a target file.
-help: To display help for the ocrconfig tool.
-import: To import the OCR contents from a previously exported OCR file.
-overwrite: To update an OCR configuration that is recorded on the OCR with the current OCR configuration information found on the node from which you are running this command.
-repair: To update an OCR configuration on the node from which you are running this command with the new configuration information specified by this command.
-restore: To restore an OCR from an automatically created OCR backup file.
-showbackup: To display OCR backup information.
-upgrade: To upgrade an OCR to a later version.
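As a sketch of typical usage (file and directory paths here are illustrative, not from any particular system):

```shell
# Run as root (UNIX) or an Administrator (Windows)
ocrconfig -export /backup/ocr_export.dmp    # logical export of OCR contents
ocrconfig -showbackup                       # list automatically created OCR backups
ocrconfig -restore /cdata/crs/backup00.ocr  # restore from an automatic backup file
ocrconfig -backuploc /u01/app/crs/cdata     # change the automatic backup location
```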

Tuesday, October 13, 2009

How-To Configure and Use Oracle Real Application Clusters (RAC) with Oracle WebLogic Server 10.3:This demo shows how to configure Oracle WebLogic Server to work with Oracle Real Application Clusters (RAC) as well as how to test the connections to the backend Oracle RAC nodes using a web application.

There are multiple configuration options for Oracle RAC features within Oracle WebLogic Server. Oracle recommends using Oracle WebLogic Server JDBC multi data sources; this applies to scenarios both with and without global transactions. You could also configure Oracle WebLogic Server to use the Oracle JDBC Thin driver's connect-time failover, or Fast Connection Failover from the Oracle JDBC driver's Implicit Connection Cache.

In this demo, we will configure and use Oracle WebLogic Server JDBC multi data sources for failover and load balancing.

Tuesday, October 6, 2009

Task List: Preparing to create a physical standby database:This section provides a checklist of tasks that you perform to create a physical standby database and synchronize it so that it is ready to begin managed recovery. Each step includes a reference to a section that provides additional information.

2. Back up the current production database to create a standby database:

RMAN> backup database include current controlfile for standby plus archivelog;

3. Manually copy the backup sets from the production server to the DR Server (location of backups must match on both production and DR). Make sure all DR filesystems are identical with respect to the target database environment.

4. On the DR server, start up the TEST database in NOMOUNT mode. Make sure the parameter file and password file are present on the DR server, in the $ORACLE_HOME/dbs location on UNIX systems.

5. Create the standby database using RMAN. This assumes the database file structures are identical on both servers, and that the initialization parameter file, password file, and backup pieces are all in their correct locations. The tnsnames.ora and listener.ora must contain entries for both the target and auxiliary databases, and the listener must be up.
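With the backups in place and the auxiliary instance started in NOMOUNT, the standby creation itself is a short RMAN run. A hedged sketch (service names and passwords are illustrative):

```sql
$ rman
RMAN> connect target sys/password@PROD
RMAN> connect auxiliary sys/password@TEST
RMAN> duplicate target database for standby nofilenamecheck;
```

NOFILENAMECHECK is needed here because the file names are the same on both hosts.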

Monday, October 5, 2009

ASMCMD Commands:
cd: Changes the current directory to the specified directory.
du: Displays the total disk space occupied by ASM files in the specified ASM directory and all its subdirectories, recursively.
exit: Exits ASMCMD.
find: Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
help: Displays the syntax and description of ASMCMD commands.
ls: Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct: Lists information about current ASM clients.
lsdg: Lists all disk groups and their attributes.
mkalias: Creates an alias for a system-generated filename.
mkdir: Creates ASM directories.
pwd: Displays the path of the current ASM directory.
rm: Deletes the specified ASM files or directories.
rmalias: Deletes the specified alias, retaining the file that the alias points to.
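A short interactive session showing a few of these commands together (the disk group and database names are illustrative):

```shell
$ export ORACLE_SID=+ASM
$ asmcmd
ASMCMD> lsdg                      # disk groups and their attributes
ASMCMD> cd +DATA/PROD/DATAFILE    # navigate into a directory
ASMCMD> ls -l                     # files with their attributes
ASMCMD> du                        # space used under the current directory
ASMCMD> exit
```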

Thursday, August 27, 2009

LogMiner is an Oracle utility which uses dictionary metadata (online, from a flat file, or from redo logs) to scan the redo/archive logs and generate a set of SQL statements which would have the same effect on the database as applying the corresponding redo records.

LogMiner can be used to mine the logs of the current database or a remote database.

The following steps are needed to analyze logs of a remote database.

1) Build the dictionary file on the production database and scp it to the mining/test server.
2) Get the list of archive logs which you want to mine from the production server and scp them to the mining/test server.
3) Add the redo log files for mining.
4) Start LogMiner.
5) Query V$LOGMNR_CONTENTS.
6) End the LogMiner session.
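The steps above can be sketched with the DBMS_LOGMNR packages (file names, directories, and the EMP table filter are illustrative; the dictionary directory must be listed in UTL_FILE_DIR on the production database):

```sql
-- Step 1, on production: build the flat-file dictionary
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/u01/mine');

-- Steps 3-6, on the mining/test server:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1234.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1235.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/u01/mine/dictionary.ora');

SELECT username, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP';

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```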

Wednesday, August 26, 2009

WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK: When row cache contention occurs and the enqueue cannot be obtained within a certain time period, a trace file containing diagnostic details is generated in the trace file location.
The trace file tends to contain the words:
>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! <<
address=700000036f27628 cid=0(dc_tablespaces)
hash=a6840aa5 typ=9 transaction=0 flags=00008000
...

The trace will often contain a systemstate dump, although the most useful information is in the header section. Typically, a session holding the row cache resource will either be on CPU or blocked by another session. If it is on CPU, then errorstacks are likely to be required to diagnose the problem, unless tuning can be done to reduce the enqueue hold time. Remember that on a multi-node system (RAC) the holder may be on another node, so systemstates from each node will be required. For each enqueue type, only a limited number of operations require that enqueue.

DC_TABLESPACES: The most likely cause is allocation of new extents. If extent sizes are set low, the application may constantly request new extents and cause contention. Do you have rapidly growing objects with small extent sizes? (You may be able to spot these by looking for objects with large numbers of extents.) Check the trace for insert/update activity, and check the objects being inserted into for their number of extents.
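A query along these lines can surface candidate objects (the extent threshold is an arbitrary starting point; adjust to your system):

```sql
SELECT owner, segment_name, segment_type, extents, next_extent
  FROM dba_segments
 WHERE extents > 1000
 ORDER BY extents DESC;
```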

DC_SEQUENCES:Check for appropriate caching of sequences for the application requirements.

DC_USERS:Deadlock and resulting "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!" can occur if a session issues a GRANT to a user, and that user is in the process of logging on to the database.

DC_SEGMENTS:This is likely to be down to segment allocation. Identify what the session holding the enqueue is doing and use errorstacks to diagnose.

The details are also available at http://www.metalink.oracle.com/ (Doc ID: 278316.1).

Solution to ROW/LIBRARY cache locks: Set the event below at the database level and restart the database. This will resolve the issue.
event='32333 trace name context forever, level 10'

Query Rewrite Feature in Materialized Views:
--------------------------------------------------------------
Set up the environment: This includes creating an MV_CAPABILITIES_TABLE and a customised function to test this feature on materialized views.
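One common way to set up and use MV_CAPABILITIES_TABLE is via the utlxmv.sql script shipped with the database and DBMS_MVIEW.EXPLAIN_MVIEW (the materialized view name SALES_MV is a hypothetical example):

```sql
-- Create MV_CAPABILITIES_TABLE in the current schema
@?/rdbms/admin/utlxmv.sql

-- Analyze the rewrite/refresh capabilities of a materialized view
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW('SALES_MV');

SELECT capability_name, possible, msgtxt
  FROM mv_capabilities_table
 WHERE capability_name LIKE 'REWRITE%';
```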

Tuesday, August 25, 2009

Database Features: 8i vs 9i vs 10G vs 10GR2:
-----------------------------------------------------------
Here is an article which will give you an idea about the major differences in features for various versions of Oracle databases:
-------------------------------------------
Oracle 8i Features:
-------------------------------------------
Auditing - Basic setup instructions to allow the auditing of user actions.

Cost Based Optimizer (CBO) And Database Statistics - Lists the available mechanisms for gathering database statistics that are used by the cost based optimizer.

Dropping Columns - Hide unused columns or completely remove them.

Index Organized Tables - Save space and improve the performance of tables which are predominantly accessed by their primary keys.

RMAN can now readily use all backups, before and after a resetlogs operation, to recover the Oracle database. There is no need to shut down the database to make a backup. This new capability means that the database can be re-opened immediately for the user community after a resetlogs operation.

Oracle Enterprise Manager 10g Grid Control is based on a flexible architecture, which allows you to deploy the Grid Control components in the most efficient and practical manner for your organization. This chapter describes some common configurations that demonstrate how you can deploy the Grid Control architecture in various computing environments.

This chapter presents the common configurations in a logical progression, starting with the simplest configuration and ending with a complex configuration that involves the deployment of high availability components, such as server load balancers, Oracle Real Application Clusters, and Oracle Data Guard.

Wednesday, July 29, 2009

Manually Creating an ASM Instance: Automatic Storage Management (ASM) is a new feature in Oracle10g that relieves the DBA of having to manually manage and tune the disks used by Oracle databases. ASM provides the DBA with a file system and volume manager that makes use of an Oracle instance (referred to as an ASM instance) and can be managed using either SQL or Oracle Enterprise Manager.

Only one ASM instance is required per node. The same ASM instance can manage ASM storage for all 10g databases running on the node.

When the DBA installs the Oracle10g software and creates a new database, creating an ASM instance is a snap. The DBCA provides a simple check box and an easy wizard to create an ASM instance, as well as an Oracle database that makes use of the new ASM instance for storage. But what happens when the DBA is migrating to Oracle10g, or didn't opt to use ASM when the 10g database was first created? The DBA will need to know how to manually create an ASM instance, and that is what this article provides.

Note: Automatic Storage Management (ASM) requires the use of Oracle Cluster Synchronization Services (CSS), and as such, CSS must be configured and running before attempting to use ASM. The CSS service is required to enable synchronization between an ASM instance and the database instances that rely on it for database file storage.

With Oracle10g R1, CSS was always configured, regardless of whether you chose to configure ASM or not. On the Linux/UNIX platform, CSS was installed and configured via the root.sh script. This caused a lot of problems, since many DBAs did not know what this process was, and most of them didn't want the CSS process running since they were not using ASM.

Oracle listened carefully to the concerns (and strongly worded complaints) about the CSS process, and in Oracle10g R2 configures it only when it is absolutely necessary. In Oracle10g R2, for example, if you don't choose to configure an ASM stand-alone instance, and you don't configure a database that uses ASM storage, Oracle will not automatically configure CSS in the root.sh script.

In the case where the CSS process is not configured to run on the node (see above), you can make use of the $ORACLE_HOME/bin/localconfig script in Linux / UNIX or %ORACLE_HOME%\bin\localconfig.bat batch file in Windows. For example in Linux, run the following command as root to configure CSS outside of the root.sh script after the fact:
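The invocation referred to above is, to the best of my knowledge, the script's add option (run as root; treat this as a sketch and check the script's usage text on your release):

```shell
# As root, on a single-instance (non-RAC) node
$ORACLE_HOME/bin/localconfig add
```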

Cloud Computing Center of Oracle: Oracle has played a pioneering role in making Grid Computing relevant to enterprises, with groundbreaking products such as Oracle Real Application Clusters (RAC), Automatic Storage Management (ASM), and the Storage Grid.

More recently, Oracle has brought Grid Computing to middleware with the Application Grid approach to infrastructure. These products/technologies make the enterprise IT infrastructure elastic so that it can grow incrementally without any theoretical upper limit, as well as provide the flexibility to move resources around in order to meet dynamic business priorities.

In September 2008, Oracle introduced new offerings that allow enterprises to benefit from the developments taking place in the area of Cloud Computing. As a part of our initial offering, Oracle has partnered with Amazon Web Services (AWS) to offer the following products and services:

• Deploy Oracle Software in the Cloud
• Backup Oracle Database in the Cloud

Wednesday, July 22, 2009

Storage Options for RAC: In this section we are going to discuss the various file system options available to host Oracle Real Application Clusters. Let's first understand what those file systems are, and then see which of them support Oracle RAC.

For those with limited hands-on experience, implementing Oracle RAC 10g can be an intimidating prospect. But it doesn't have to be that way.

In this guide, I'll provide the precise procedures for converting a single-instance Oracle 10g database to Oracle RAC on RHEL3. We'll use Oracle Automatic Storage Management (ASM), an integrated file system and a volume management solution built into the Oracle database kernel, for RAC database storage because it simplifies database storage management tasks while offering storage reliability and performance.

To summarize on a very high level, there are several ways to convert your database to RAC. If the database is small, you can consider installing a new RAC database on ASM and export/import your data from your current database to the RAC database. For a larger database, the preferred method would be to use RMAN to migrate the database files over to ASM.

The method demonstrated here is a two-phase approach:

first, migrating the data to ASM, and
second, converting the single-instance database to RAC.

If you are new to ASM, I would recommend taking this migration path to get familiar with ASM before leaping into ASM and RAC at the same time. (For more information about Oracle ASM, visit the Oracle ASM Product Center or refer to the documentation.)

This guide requires a basic understanding of RAC architecture and some familiarity with managing and administering Oracle Database and Red Hat Enterprise Linux. Refer to the documentation for details.

Wednesday, June 10, 2009

Create a Recovery Catalog Database: As you are already aware of the RMAN concepts, it is time to create a recovery catalog database to keep all backup/recovery metadata. Let's first create the environment for the recovery catalog database.

Pre-requisites:
1: It is assumed that the Oracle software is installed and a normal Oracle database (e.g. RMAND) has already been created using DBCA.
2: It is assumed that the listener is up and running.
3: It is assumed that the tnsnames.ora contains the RMAND database connection information.
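The creation steps themselves typically look like this sketch (user name, password, and tablespace are illustrative):

```sql
-- Connect to the RMAND database and create the catalog owner
SQL> CREATE USER rman IDENTIFIED BY rman
       DEFAULT TABLESPACE tools QUOTA UNLIMITED ON tools;
SQL> GRANT recovery_catalog_owner TO rman;

-- Then create the catalog schema from the RMAN client
$ rman catalog rman/rman@RMAND
RMAN> create catalog;
```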

The above steps complete the catalog database creation. The RMAN schema will contain all the RMAN objects that hold the backup/recovery metadata.

Registering the TARGET database: Now that the recovery catalog database is created, it is time to register the target database which needs to be backed up using RMAN.
Pre-requisites:
1: It is assumed that the Oracle software is installed and a normal Oracle database (e.g. RIPO) has already been created using DBCA.
2: It is assumed that the listener is up and running.
3: It is assumed that the tnsnames.ora contains the RMAND (recovery catalog) and RIPO (target database) connection information.
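Registration itself is a one-time RMAN command (credentials are illustrative):

```sql
$ rman target sys/password@RIPO catalog rman/rman@RMAND
RMAN> register database;
RMAN> report schema;   -- confirms the target's files are now known to the catalog
```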

The document below is a good reference for a database upgrade to 10gR2.
Doc ID: 316889.1: Complete Checklist for Manual Upgrades to 10gR2

This document will give you all the details regarding a database upgrade to any version.
Doc ID: 730365.1: Oracle Database Upgrade Path Reference List

For the above documents you need to have an account with https://metalink.oracle.com/. An account with Oracle for this site is possible only when you have a valid Customer Service Identification (CSI) number available with you.

Components of the RMAN Environment:Component:Target databaseDescription:The control files, datafiles, and optional archived redo logs that RMAN is in charge of backing up or restoring. RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations. The work of backup and recovery is performed by server sessions running on the target database.Required?:Yes

Component:RMAN clientDescription:The client application that manages backup and recovery operations for a target database. The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net.Required?:Yes

Component:Recovery catalog databaseDescription:A database containing the recovery catalog schema, which contains the metadata that RMAN uses to perform its backup and recovery operations.Required?:No

Component:Recovery catalog schemaDescription:The user within the recovery catalog database that owns the metadata tables maintained by RMAN. RMAN periodically propagates metadata from the target database control file into the recovery catalog.Required?:No

Component:Standby databaseDescription:A copy of the primary database that is updated using archived logs created by the primary database. RMAN can create or back up a standby database. You can fail over to the standby database if the primary database goes down.Required?:No

Component: Duplicate databaseDescription:A copy of the primary database that you can use for testing purposes.Required?:No

Component:Media management applicationDescription:A vendor-specific application that allows RMAN to back up to a storage system such as tape.Required?:No

Component:Enterprise ManagerDescription:A browser-based interface to the database, including backup and recovery through RMAN.Required?No

As you know, the above-mentioned required components (target database and RMAN client) are available as soon as you are ready with your target database. The RMAN client is an executable which comes with the database software installation. Fire the command below and you will see whether RMAN is installed or not.
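The check is simply invoking the RMAN executable from the Oracle software owner's shell, for example:

```shell
$ which rman            # confirms the executable is on the PATH
$ rman target /         # starts the client and prints its release banner
```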

Now that you know how to get ready with the required components, you should have a basic idea about how RMAN stores all the backup and recovery related information. The section below will provide a clear idea on this.

Storage of the RMAN Repository in the Control File:Because most information in the recovery catalog is also available in the target database's control file, RMAN supports an operational mode in which it uses the target database control file instead of a recovery catalog. This mode is especially appropriate for small databases where installation and administration of a separate recovery catalog database is burdensome. The only RMAN feature that is not supported in NOCATALOG mode is stored scripts.

When you do not use a recovery catalog, the control file is the exclusive source of information about backups and copies, as well as other relevant information. The control file contains two types of records:
1: Circular reuse records
2: Noncircular reuse records

Circular Reuse Records: Circular reuse records contain noncritical information that is eligible to be overwritten if the need arises. These records contain information that is continually generated by the database. Circular reuse records are arranged in a logical ring. When all available record slots are full, the database either expands the control file to make room for a new record or overwrites the oldest record. The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter specifies the minimum age in days of a record before it can be reused; by default it is 7 days. So it is always recommended to set CONTROL_FILE_RECORD_KEEP_TIME to a higher value when no catalog database is used, in order to retain as much backup and recovery metadata as possible.
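For example, to check the current setting and raise it to 30 days (the value 30 is only an illustration; SCOPE=BOTH assumes an spfile is in use):

```sql
SQL> SHOW PARAMETER control_file_record_keep_time

SQL> ALTER SYSTEM SET control_file_record_keep_time = 30 SCOPE=BOTH;
```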

Noncircular Reuse Records:Noncircular reuse records contain critical information that does not change often and cannot be overwritten. Some examples of information in noncircular reuse records include datafiles, online redo logs, and redo threads.

RMAN Repository: The RMAN repository is the collection of metadata about the target databases that RMAN uses for backup, recovery, and maintenance. The recovery catalog is an external Oracle database in which this information can be stored. The control file has finite space for records of backup activities, while a recovery catalog can store a much longer history. The added complexity of operating a recovery catalog database can be offset by the convenience of having the extended backup history available if you have to do a recovery that goes further back in time than the history in the control file.

There are also a few features of RMAN that only function when you use a recovery catalog. For example, RMAN stored scripts are stored in the recovery catalog, so commands related to them require the use of a recovery catalog. Other RMAN commands are specifically related to managing the recovery catalog and so are not available (and not needed) if RMAN is not connected to a recovery catalog.

The recovery catalog's version of the RMAN repository is maintained solely by RMAN. The target instance never accesses it directly. RMAN propagates information about the database structure, archived redo logs, backup sets, and datafile copies into the recovery catalog from the target database's control file after any operation that updates the repository, and also before certain operations.

The recovery catalog database is no different from a normal database. As it holds the backup/recovery information for all target databases, it is important to back up this database as well. It is recommended to back it up using conventional cold/hot backup methods.

Now you know what the required components are and how RMAN stores the backup/recovery metadata. It is time to set up an environment to do the RMAN backup.

Wednesday, June 3, 2009

Enterprise Manager Grid Control - Version: 10.1 to 10.2
Information in this document applies to any platform for DB Console. This article provides instructions on how to manage the DB Console application used with a single-instance 10G database:
1: Create the repository and configuration files for the DB Control application
2: Drop the repository and configuration files for the DB Control application
3: Recreate the repository and configuration files for the DB Control application
Create DB Control Objects:
Option 1: Create the DB Control configuration files
Option 2: Create the DB Control repository and configuration files

Option 1:
For DB Control 10.1.x, run the command: /bin/emca -r
For DB Control 10.2.x, run the command: /bin/emca -config dbcontrol db

Option 2:
For DB Control 10.1.x, run the command: /bin/emca
For DB Control 10.2.x, run the command: /bin/emca -config dbcontrol db -repos create

For many DBA issues we deal with Oracle Support, which requires quick communication with Oracle. Sometimes during a support window the Oracle Support team might ask you to upload files, or you may need to download patches, that are large in size, and the upload facilities in a Service Request (SR) might be too slow for the transfer. Under such circumstances the following can be used to communicate with the Oracle Support team:

External Support FTP site: Scope and Application
Users of this FTP site should be aware of the following:
1: Customers and employees should be familiar with FTP commands or an FTP tool.
2: The following directory structures will be used by all groups in Support:

/ftp/anonymous/support/incoming/

/ftp/anonymous/support/outgoing

3: All files in these directories will be erased every 7 calendar days.
4: The outgoing directory is public domain, and therefore nothing security-sensitive should be placed here (no passwords, etc.).

Oracle10g New Optimizer Hints: With the 10g database release, there are many new optimizer hints available to control the optimization rules. Here are a few for your understanding. Follow the URL to get an idea about the Oracle hints:
http://www.fast-track.cc/10g_24.htm

Stop Jobs:It is not always easy to stop a running job. There are different ways to stop a job:

1: You can change the JOB_QUEUE_PROCESSES parameter to 0. This will stop all the jobs running in the database.
2: You can find the server process of the job and kill it, which will stop the job. Find the SID from DBA_JOBS_RUNNING and get the server process for that SID. Kill the server process by kill -9 and you are all set.
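The second approach can be sketched as follows (run as a DBA; substitute the sid, serial#, and spid values returned by the query):

```sql
-- Map each running job to its session and OS server process
SELECT r.job, r.sid, s.serial#, p.spid
  FROM dba_jobs_running r, v$session s, v$process p
 WHERE r.sid = s.sid
   AND s.paddr = p.addr;

-- Then either kill the session from SQL*Plus...
ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
-- ...or kill the server process at the OS level: kill -9 <spid>
```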

Running Jobs: How do you know which jobs are running on the server? To find the list of jobs running on the server, execute the query below:
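A simple query against the DBA_JOBS_RUNNING view does the trick:

```sql
SELECT job, sid, last_date, this_date, this_sec
  FROM dba_jobs_running;
```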

Tuesday, May 26, 2009

Oracle SQLTXPLAIN:-----------------------------------SQLTXPLAIN is a tool that generates a comprehensive report about a SQL statement which has been provided as input to the tool. The report includes the explain plan for a given SQL statement, cost-based optimizer statistics for the objects accessed by the statement, and details of related objects. The output HTML report produced can be used to analyze and diagnose SQL performance issues (SQL tuning).

During the installation of SQLTXPLAIN, you will be prompted for the following values:
Specify SQLTXPLAIN password:

Enter value for host_string: (for instance, if your db name is prod, then input @prod)
Enter value for application_schema: (the schema under which you examine the SQL statement)
Enter value for default_tablespace:
Enter value for temporary_tablespace:

SQLT inputs one SQL DML statement, provided by one of these methods:
1. XPLAIN: As a stand-alone SQL in a flat text file.
2. XTRACT: As a memory-resident SQL, identified by its hash_value or sql_id.
3. XECUTE: As a stand-alone script that contains one SQL, together with the declaration and values of its bind variables.
4. XTRXEC: As a combination of XTRACT and XECUTE, this method gets a SQL from memory and executes both XTRACT and XECUTE.

SQLT provides the following benefits:
1. A consistent snapshot of the most commonly used pieces of information associated with a SQL analysis and tuning effort, such as CBO statistics and performance dynamic views. For most of these objects, inconsistent snapshots of different objects would render the analysis inaccurate or inconclusive.
2. It reduces the number of trips (and thus overall time) between the people involved in the SQL analysis and tuning. This is especially relevant in systems where the person doing the analysis is external to the organization or does not have full access to the system experiencing poor performance.
3. It preserves the CBO statistics and initialization parameters in effect at the time of the analysis, making it easier to revert the environment to a known state. Furthermore, the associated CBO statistics can easily be migrated to a similar system in order to try to replicate the same plan, good or bad, and thus the query performance.
4. Since the tool is installed into its own schema and makes use of temporary objects for most of its data structures, it is lightweight and only moderately intrusive. The schema owner SQLTXPLAIN is granted only a small subset of roles and privileges in order to generate SQL analysis results, and does not read application data itself.

The Metalink note below will give you the full details regarding the SQLTXPLAIN tool and its usage. The software is also available in the same note.

Oracle Statspack Maintenance:The Statspack data maintenance scripts must be run as the PERFSTAT user.

SPPURGE.SQL: Purges a limited range of snapshot IDs for a given database instance.
SPTRUNC.SQL: Truncates all performance data in Statspack tables.
SPUEXP.PAR: An export parameter file supplied for exporting the whole PERFSTAT user.

This is how you can know what is going on with the statspack owner and tablespace usage.

SQL> select OWNER, sum(bytes)/1024/1024 "Size in MB"
       from dba_segments
      where TABLESPACE_NAME='PERFSTAT'
      group by OWNER
      order by sum(bytes)/1024/1024 desc;

OWNER                          Size in MB
------------------------------ ----------
PERFSTAT                       21238.6094

1 row selected.

This is how we can find out how long it has been since maintenance was last done on the Statspack data. This will also give an idea about the Statspack data availability.
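A query against the base snapshot table shows the oldest and newest snapshots still held, which reveals how far back the data goes:

```sql
SQL> SELECT MIN(snap_id), MAX(snap_id),
            MIN(snap_time), MAX(snap_time)
       FROM perfstat.stats$snapshot;
```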

Friday, May 22, 2009

Logical vs Physical Standby Database:
------------------------------------------------
The primary functional differences between logical and physical standby database setups are:
Physical standby database: This is always an exact structural duplicate of the master database.
Logical standby database: Permits you to add additional objects (tables, indexes, etc.) to the database.
Logical standby database: Logical standby is based on newer technologies (logical standby is new in Oracle 9.2) and tends to be generally regarded as more temperamental than physical standby.

Physical standby database: This is physically identical to the primary database on a block-for-block basis. It's maintained in managed recovery mode to remain current and can be set to read only. Archive logs are copied and applied.
Logical standby database: This is logically identical to the primary database. It is updated using SQL statements.

Physical standby database: Archived redo logs are FTP'ed directly to the standby database, which is always running in "recover" mode. Upon arrival, the archived redo logs are applied directly to the standby database.
Logical standby database: This uses LogMiner techniques to transform the archived redo logs into native DML statements (insert, update, delete). This DML is transported and applied to the standby database.
Logical standby database: This can have additional materialized views and indexes added for faster performance.

Sed:
-------------
This section covers how to use sed, a special editor for modifying files automatically. If you want to write a program to make changes in a file, sed is the tool to use.

There are a few programs that are the real workhorses in the Unix toolbox. These programs are simple to use for simple applications, yet have a rich set of commands for performing complex actions. Don't let the complex potential of a program keep you from making use of its simpler aspects. This chapter, like all the rest, starts with the simple concepts and introduces the advanced topics later on. A note on comments: when I first wrote this, most versions of sed did not allow you to place comments inside the script. Lines starting with the '#' character are comments. Newer versions of sed may support comments at the end of the line as well.
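As a minimal, runnable illustration of the substitute command, sed's workhorse:

```shell
# Replace every occurrence of "day" with "night" on each input line.
# The trailing 'g' makes the substitution global within each line.
printf 'good day\nday after day\n' | sed 's/day/night/g'
# prints:
#   good night
#   night after night
```

The same command can also live in a script file and be run with sed -f, where lines starting with '#' are comments.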

Oracle Database Materialized View Refresh:
---------------------------------------------------------------
Refresh Options:
--------------------------
COMPLETE (C): Refreshes by recalculating the materialized view's defining query when atomic_refresh=TRUE; COMPLETE is the same as FORCE if atomic_refresh=FALSE.
FAST (F): Refreshes by incrementally applying changes to the detail tables.
FORCE (?): Uses the default refresh method. If the default refresh method is FORCE, it tries to do a fast refresh. If that is not possible, it does a complete refresh.
ALWAYS (A): Unconditionally does a complete refresh.

Refresh Operations:
----------------------------
DBMS_MVIEW.REFRESH: Refreshes one or more materialized views.

DBMS_MVIEW.REFRESH_ALL_MVIEWS: Refreshes all materialized views.

DBMS_MVIEW.REFRESH_DEPENDENT: Refreshes all table-based materialized views that depend on a specified detail table or list of detail tables.
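A few representative calls (the materialized view names SALES_MV and COST_MV and the detail table SALES are hypothetical; the method letters match the refresh options above):

```sql
-- Fast-refresh one materialized view
EXECUTE DBMS_MVIEW.REFRESH('SALES_MV', 'F');

-- Complete-refresh one MV and fast-refresh another, in list order
EXECUTE DBMS_MVIEW.REFRESH(LIST => 'SALES_MV,COST_MV', METHOD => 'CF');

-- Complete-refresh every MV that depends on the SALES detail table
VARIABLE failures NUMBER
EXECUTE DBMS_MVIEW.REFRESH_DEPENDENT(:failures, 'SALES', 'C');
```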

Required Initialization Parameters for Warehouse Refresh:
---------------------------------------------------------
JOB_QUEUE_PROCESSES: The number of background processes. Determines how many materialized views can be refreshed concurrently.

JOB_QUEUE_INTERVAL: In seconds, the interval at which the job queue scheduler checks whether a new job has been submitted to the job queue.

UTL_FILE_DIR: Determines the directory where the refresh log is written. If unspecified, no refresh log will be created.

Note:These packages also create a log which, by default, is called refresh.log and is useful in helping to diagnose problems during the refresh process. This log file can be renamed by calling the procedure DBMS_OLAP.SET_LOGFILE_NAME ('log filename').

DBMS_SNAPSHOT
---------------------
DBMS_SNAPSHOT enables you to refresh snapshots that are not part of the same refresh group and to purge logs.

Note: DBMS_MVIEW is a synonym for DBMS_SNAPSHOT; DBMS_MVIEW is the name intended for data warehousing use going forward.

Also provided with DBMS_SNAPSHOT is the REFRESH_ALL procedure. This procedure refreshes all materialized views that were defined using automatic refreshes.

SQL> execute DBMS_SNAPSHOT.REFRESH_ALL;

PL/SQL procedure successfully completed.

Data Guard Protection Modes:
This section describes the Data Guard protection modes. In these descriptions, a synchronized standby database is one that meets the minimum requirements of the configured data protection mode and that does not have a redo gap.

Maximum Availability:
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.

This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Maximum Performance:
This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).

This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.

This is the default protection mode.

Maximum Protection:
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.

Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.

Setting the Data Protection Mode of a Primary Database
Perform the following steps to change the data protection mode of a primary database:

Step 1 Select a data protection mode that meets your availability, performance, and data protection requirements.

Step 2 Verify that redo transport is configured to at least one standby database.
The value of the LOG_ARCHIVE_DEST_n database initialization parameter that corresponds to the standby database must include the redo transport attributes listed below for the data protection mode that you are moving to.

If the primary database has more than one standby database, only one of those standby databases must use the redo transport settings.

The standby database must also have a standby redo log.

Required Redo Transport Attributes for Data Protection Modes:

Maximum Availability: AFFIRM, SYNC, DB_UNIQUE_NAME

Maximum Performance: NOAFFIRM, ASYNC, DB_UNIQUE_NAME

Maximum Protection: AFFIRM, SYNC, DB_UNIQUE_NAME
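As a sketch of how those attributes fit together (the destination number, net service name, and DB_UNIQUE_NAME here are assumptions, chosen to match the CHICAGO/BOSTON example used later), a redo transport destination for Maximum Availability might look like:

```sql
-- Hypothetical destination for a standby reachable via the BOSTON service;
-- SYNC + AFFIRM are the attributes required for Maximum Availability.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
  'SERVICE=BOSTON SYNC AFFIRM DB_UNIQUE_NAME=BOSTON
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
```

For Maximum Performance, SYNC AFFIRM would be replaced with ASYNC NOAFFIRM.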

Step 3 Verify that the DB_UNIQUE_NAME database initialization parameter has been set to a unique name on the primary and standby database.

For example, if the DB_UNIQUE_NAME parameter has not been defined on either database, the following SQL statements might be used to assign a unique name to each database.

Execute this SQL statement on the primary database:

SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='CHICAGO' SCOPE=SPFILE;

Execute this SQL statement on the standby database:

SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='BOSTON' SCOPE=SPFILE;

Step 4 Verify that the LOG_ARCHIVE_CONFIG database initialization parameter has been defined on the primary and standby database and that its value includes a DG_CONFIG list that includes the DB_UNIQUE_NAME of the primary and standby database.

For example, if the LOG_ARCHIVE_CONFIG parameter has not been defined on either database, the following SQL statement could be executed on each database to configure the LOG_ARCHIVE_CONFIG parameter:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(CHICAGO,BOSTON)';

Step 5 Shut down the primary database and restart it in mounted mode if the protection mode is being set to Maximum Protection or being changed from Maximum Performance to Maximum Availability. If the primary database is an Oracle Real Application Clusters database, shut down all of the instances and then start and mount a single instance. For example:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

Step 6 Set the data protection mode.
Execute the following SQL statement on the primary database, choosing the keyword that matches the target mode:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE | PROTECTION};

If the primary database is an Oracle Real Application Clusters database, any instances stopped in Step 5 can now be restarted.

Step 7 Open the primary database.
If the database was restarted in Step 5, open the database:

SQL> ALTER DATABASE OPEN;

Step 8 Confirm that the primary database is operating in the new protection mode.
Perform the following query on the primary database:

SQL> SELECT PROTECTION_MODE FROM V$DATABASE;

Thursday, May 14, 2009

ASM using ASMLib and Raw Devices:
--------------------------------------------
This is a beautiful document which covers almost the full setup for an ASM (Automatic Storage Management) environment. For your reference, it covers the following:

Automatic Storage Management (ASM) in Oracle Database 10g:
-----------------------------------------------------------------
This document covers a small part of Oracle Automatic Storage Management (ASM): commands to manage an ASM instance and database. You will also get a quick idea of how to migrate an Oracle single-instance database to ASM using RMAN.

Managing ASM through SQL interfaces in Oracle Database 10g Release 1 posed a challenge for system administrators who were not very familiar with SQL and preferred a more conventional command-line interface. In Oracle Database 10g Release 2, you have an option to manage the ASM files by using ASMCMD—a powerful and easy-to-use command-line tool.

In Oracle Database 10g Release 1, ASM disk groups are not visible outside the database for regular file system administration tasks such as copying and creating directories. In Oracle Database 10g Release 2, however, you can transfer the files from ASM to locations outside of the disk groups via FTP and through a Web browser using HTTP.

Set asmcmd prompt display:
You can invoke asmcmd with the -p option to display the current path, very similar to a UNIX/Linux shell prompt:

PS1="`hostname`*\${ORACLE_SID}-\${PWD}>"
export PS1

fred:/u01/app/oracle/admin>

asmcmd command line history:
The asmcmd utility does not provide a command history with the up-arrow key. With rlwrap installed, this can be fixed by adding the following entry to the ~oracle/.bashrc file:
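The entry itself is omitted above; a common one (an assumption based on typical rlwrap usage, not taken from the original post) is an alias that wraps asmcmd in rlwrap:

```shell
# Hypothetical ~oracle/.bashrc entry: wrap asmcmd in rlwrap
# so the up-arrow key recalls previous commands.
alias asmcmd='rlwrap asmcmd'
```

After sourcing .bashrc (or logging in again), running asmcmd picks up readline-style history and line editing.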

SQL Help for ASM (Automatic Storage Management):
--------------------------------------------------------------
Create Diskgroup:
Use the CREATE DISKGROUP clause to create a collection of disks. Oracle Database manages a disk group as a logical unit and evenly spreads each file across the disks to balance I/O. Oracle Database also automatically distributes database files across all available disks in disk groups and rebalances storage automatically whenever the storage configuration changes.

This statement creates a disk group, assigns one or more disks to the disk group, and mounts the disk group for the first time. If you want Automatic Storage Management to mount the disk group automatically in subsequent instances, you must add the disk group name to the value of the ASM_DISKGROUPS initialization parameter in the initialization parameter file. If you use an SPFILE, then the disk group is added to the initialization parameter automatically.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_5008.htm#i2153287
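As a sketch (the disk group name, failure group names, and device paths below are hypothetical, not from the original post), a normal-redundancy disk group might be created like this:

```sql
-- Hypothetical disk group with two failure groups on separate controllers;
-- device paths are assumptions for illustration only.
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/devices/diska1', '/devices/diska2'
  FAILGROUP controller2 DISK '/devices/diskb1', '/devices/diskb2';
```

To have dgroup1 mounted automatically by later ASM instance startups, it would then be added to ASM_DISKGROUPS (or, with an SPFILE, this happens automatically).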

Handle Oracle Password:
-----------------------------------
Do you know what to do in case, as a DBA, you have forgotten an Oracle database account's password?

There is a very simple way to handle this issue. Let me explain the situation. As a DBA you might be handling thousands of Oracle database accounts and maintaining their passwords. Sometimes you need an application account's password to troubleshoot an issue for application users, and everybody knows it is very hard to remember every application user's password. You may not want to change the password, since that could affect the application users, and you do not want to spend time emailing application users to collect it. There you go, an easy way to handle such Oracle passwords:

SQL> Select username,password from DBA_USERS where username='user';

Output will be: the username and the encrypted password. Save this information.

SQL> ALTER USER "user" IDENTIFIED BY xyz123;

Output will be: the user's password is changed to xyz123.

Now connect as the user with password xyz123 and complete your maintenance activity.

After your maintenance activity is over, Change the password to the old password and you are all set. No issues for anybody.
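The restore step relies on the ALTER USER ... IDENTIFIED BY VALUES form, which accepts the saved encrypted password directly (the username and hash below are made-up placeholders):

```sql
-- Before the change, save the current encrypted password
-- (pre-11g data dictionaries expose it in DBA_USERS):
--   SELECT password FROM dba_users WHERE username = 'SCOTT';
-- ... do the maintenance with the temporary password ...
-- Then restore the original password from the saved hash:
ALTER USER scott IDENTIFIED BY VALUES 'F894844C34402B67';  -- placeholder hash
```

After this, application users can log in with their original password as if nothing changed.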

Monday, April 27, 2009

Listener Commands:
The listener's involvement is summarized as: the listener catches the request, spawns or requests a database process/thread, redirects or passes the connection to that process/thread (usually on a different port), and gets out of the way.

Reload:
Causes the listener to reread the listener.ora file. Enables the addition or modification of statically configured services without stopping the listener. In addition, the database services, instances, service handlers, and listening endpoints that were dynamically registered with the listener are unregistered and subsequently registered again.

LSNRCTL> reload

save_config:
Use the SAVE_CONFIG command to compare the current configuration state of the listener, including trace level, trace file, trace directory, and logging, to the listener.ora file. Any changes are stored in listener.ora, preserving formatting, comments, and case as much as possible. Prior to modification of the listener.ora file, a backup of the file, called listener.bak, is created. (See also the CHANGE_PASSWORD command.)

View the configured listener service:LSNRCTL> services

Set parameter:
current_listener: sets the listener to which subsequent commands apply.

LSNRCTL> set current_listener uwlistener