Tuesday, July 27, 2010

While creating a database using dbca, the 11g database creation GUI may give the following error message in a pop-up window, from which you can only ignore the error:
ORA-00845: MEMORY_TARGET not supported on this system
The same scenario also occurs whenever you try to start your database; STARTUP then shows an error message like the one below.

SQL> STARTUP
ORA-00845: MEMORY_TARGET not supported on this system

Cause of the Problem
•Starting from Oracle 11g, the Automatic Memory Management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
•On Linux, the shared memory file system needs to be mounted on the /dev/shm directory of the operating system.
•The size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.
•AMM (Automatic Memory Management) in 11g manages both the SGA and the PGA together through the MMAN process.
•The MEMORY_TARGET parameter in 11g replaces (SGA_TARGET + PGA_AGGREGATE_TARGET) of 10g.
•The MEMORY_MAX_TARGET parameter in 11g replaces the SGA_MAX_SIZE parameter of 10g.
•ORA-00845 can arise for the following two reasons on a Linux system.

1)If the shared memory which is mapped to /dev/shm directory is less than the size of MEMORY_TARGET or MEMORY_MAX_TARGET.
or,
2)If the shared memory is not mapped to /dev/shm directory.

Solution of the Problem:

Make sure /dev/shm is properly mounted. You can check it with the
#df -h or #df -k command.
The output should look similar to the following:
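As a sketch, the check and an illustrative result (the sizes shown are examples only and will differ on your system):

```shell
# Show the tmpfs mount backing /dev/shm; the sizes below are illustrative
df -h /dev/shm
# Filesystem      Size  Used Avail Use% Mounted on
# shmfs           1.0G  350M  674M  35% /dev/shm
```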

We see here that /dev/shm has been assigned 1G of memory. Now if you set MEMORY_TARGET to more than 1G, the ORA-00845 error above will arise. For example, if you have MEMORY_TARGET or MEMORY_MAX_TARGET set to 12G, then you can mount the shared memory with a 13G size like below.
As a root user,
# mount -t tmpfs shmfs -o size=13g /dev/shm
In order to make the setting persistent, so that it survives a machine restart, add an entry in /etc/fstab similar to the following:
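A typical /etc/fstab entry would look like this (the size=13g value follows the example above; adjust it to exceed your own MEMORY_TARGET):

```
shmfs  /dev/shm  tmpfs  size=13g  0 0
```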

Friday, July 23, 2010

The most crucial and vital structure for recovery operations is the online redo log, which consists of two or more pre-allocated files that store all changes made to the database as they occur. Every instance of an Oracle database has an associated online redo log to protect the database in case of an instance failure.

What is a redo log thread?

Each database instance has its own online redo log groups. These online redo log groups, multiplexed or not, are called an instance's thread of online redo. In typical configurations, only one database instance accesses an Oracle database, so only one thread is present. When running Oracle Real Application Clusters, however, two or more instances concurrently access a single database and each instance has its own thread. The relationship between Oracle instances and a database is many-to-one: more than one instance can access a database. This kind of configuration is called a Parallel Server configuration.

What do those files contain??

Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the rollback segment data block, and the transaction table of the rollback segments.

Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments. Therefore, the online redo log also protects rollback data.

When you recover the database using redo data, Oracle reads the change vectors in the redo records and applies the changes to the relevant blocks.

Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one of the online redo log files by the Oracle background process Log Writer (LGWR). Whenever a transaction is committed, LGWR writes the transaction's redo records from the redo log buffer of the SGA to an online redo log file, and a system change number (SCN) is assigned to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.

Redo records can also be written to an online redo log file before the corresponding transaction is committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to an online redo log file, even though some redo records may not be committed. If necessary, Oracle can roll back these changes.

How Oracle Writes to the Online Redo Log?

The online redo log of a database consists of two or more online redo log files. Oracle requires

a minimum of two files to guarantee that one is always available for writing while the other is being archived (if in ARCHIVELOG mode).

LGWR writes to online redo log files in a circular fashion. When the current online redo log file fills, LGWR begins writing to the next available online redo log file. When the last available online redo log file is filled, LGWR returns to the first online redo log file and writes to it, starting the cycle again. Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled. There can be contention between the filling of the online redo log files and their archiving if the logs fill faster than they can be written to the archived log files. This is because online log files are written in Oracle blocks, while archives are written in OS blocks.

If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the changes recorded in it have been written to the data files.

If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the changes recorded in it have been written to the datafiles and once the file has been archived.

What is meant by Active (Current) and Inactive Online Redo Log Files???

At any given time, Oracle uses only one of the online redo log files to store redo records written from the redo log buffer. The online redo log file that LGWR is actively writing to is called the current online redo log file.

Online redo log files that are required for instance recovery are called active online redo log files. Online redo log files that are not required for instance recovery are called inactive.

If you have enabled archiving (ARCHIVELOG mode), Oracle cannot reuse or overwrite an active online log file until ARCn has archived its contents. If archiving is disabled (NOARCHIVELOG mode), when the last online redo log file fills, writing continues by overwriting the first available inactive file.
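You can see which group is CURRENT, ACTIVE, or INACTIVE, and whether it has been archived, by querying V$LOG (a quick sketch; the exact columns available vary slightly by version):

```sql
-- STATUS shows CURRENT / ACTIVE / INACTIVE for each redo log group
SELECT group#, thread#, sequence#, archived, status
  FROM v$log;
```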

Which parameters influence the log switches??

LOG_CHECKPOINT_TIMEOUT specifies the maximum amount of time, in seconds, that can pass since the incremental checkpoint at the position where the last write to the redo log (sometimes called the tail of the log) occurred. This parameter also guarantees that no buffer will remain dirty (in the cache) for more than that number of seconds. This is time-based switching of the log files.

LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks. This is block-based switching of the log files.
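As a sketch, either parameter can be set dynamically with ALTER SYSTEM (the values 1800 seconds and 10000 blocks are illustrative only):

```sql
-- Time-based: checkpoint at most 1800 seconds behind the tail of the log
ALTER SYSTEM SET log_checkpoint_timeout = 1800 SCOPE = BOTH;
-- Block-based: checkpoint at most 10000 OS blocks behind the tail of the log
ALTER SYSTEM SET log_checkpoint_interval = 10000 SCOPE = BOTH;
```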

How to add a redo log file group and member??

Suppose you are to add group 5 with 2 members; the command is:

ALTER DATABASE
ADD LOGFILE GROUP 5
('c:\oracle\oradata\whs\redo\redo_05_01.log',
'd:\oracle\oradata\whs\redo\redo_05_02.log')
SIZE 100M;

The following command adds another member to an already existing group:

ALTER DATABASE ADD LOGFILE MEMBER
'c:\oracle\oradata\whs\redo\redo_05_03.log'
TO GROUP 5;

How to move a redo log file from one destination to another destination??

01. Shut down the database with NORMAL or IMMEDIATE, but not ABORT.

Shutdown immediate;

02. Copy the online redo log files to the new location.

On Unix, use the mv command.

On Windows, use the move command.

03. Start the database in MOUNT mode, logging in as SYSDBA (do not open the database).

startup mount pfile=

04. Issue the following statement.
Example:
You are changing the files from c:\oracle\oradata\redologs to c:\oracle\oradata\whs\redologs, and likewise on the d:\ drive.
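A sketch of the statement for step 04, with assumed file names (repeat the RENAME FILE clause for each file you moved):

```sql
-- Repoint the control file at the new log locations, then open the database
ALTER DATABASE RENAME FILE
  'c:\oracle\oradata\redologs\redo_01_01.log'
TO
  'c:\oracle\oradata\whs\redologs\redo_01_01.log';

ALTER DATABASE OPEN;
```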

Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.

The following statement drops redo log group number 3:

ALTER DATABASE DROP LOGFILE GROUP 3;

When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. The control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
To drop an online redo log member, you must have the ALTER DATABASE system privilege. Consider the following restrictions and precautions before dropping individual online redo log members:

It is permissible to drop online redo log files so that a multiplexed online redo log becomes temporarily asymmetric. For example, if you use duplexed groups of online redo log files, you can drop one member of one group, even though all other groups have two members each. However, you should rectify this situation immediately so that all groups have at least two members, and thereby eliminate the single point of failure possible for the online redo log.
An instance always requires at least two valid groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.) If the member you want to drop is the last valid member of the group, you cannot drop the member until the other members become valid. To see a redo log file's status, use the V$LOGFILE view. A redo log file becomes INVALID if Oracle cannot access it. It becomes STALE if Oracle suspects that it is not complete or correct. A stale log file becomes valid again the next time its group is made the active group.

You can drop an online redo log member only if it is not part of an active or current group. If you want to drop a member of an active group, first force a log switch to occur.

Make sure the group to which an online redo log member belongs is archived (if archiving is enabled) before dropping the member. To see whether this has happened, use the V$LOG view.

To drop specific inactive online redo log members, use the ALTER DATABASE statement with the DROP LOGFILE MEMBER clause.

The following statement drops the redo log file 'redo_01_01.log' (member 01 of group 01).
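The statement would be similar to the following (the full path is assumed for illustration, matching the naming used in the earlier examples):

```sql
ALTER DATABASE DROP LOGFILE MEMBER
  'c:\oracle\oradata\whs\redo\redo_01_01.log';
```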

When an online redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping an online redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log file.

To drop a member of an active group, you must first force a log switch and as a result that member becomes inactive.

How can I force the log switch??
ALTER SYSTEM SWITCH LOGFILE;

How can I Clear an Online Redo Log File??

ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:
(1) there are only two log groups, or
(2) the corrupt redo log file belongs to the current group.
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.
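If the corrupt log has not been archived, a sketch of the statement with the UNARCHIVED keyword, using group 3 from the example above (note that after clearing an unarchived log the archive stream is no longer complete, so take a full backup):

```sql
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
```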

While trying to trace an event from user Scott, who has the CONNECT and RESOURCE role privileges, I faced the ORA-01031 error when running the command “alter session set events ’10132 trace name context forever, level 8′;”. After a bit of googling I saw that the CONNECT role changed in 10g Release 2 and no longer has the ALTER SESSION system privilege. The official documentation says: “beginning in Oracle Database 10g Release 2 (10.2), the CONNECT role has only the CREATE SESSION privilege; all other privileges are removed.”
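The straightforward fix is to grant the missing privilege explicitly (assuming the user is SCOTT, as above):

```sql
GRANT ALTER SESSION TO scott;
```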

Oracle RDBMS databases store data logically in the form of tablespaces and physically in the form of datafiles. Datafiles are added to tablespaces as the database space requirement grows bigger. However, there are several reasons you may want to remove or delete datafiles from a tablespace: you accidentally added a wrongly sized or unwanted datafile to a tablespace, the space usage has become smaller and some datafiles should be removed, or you are attempting to recover a database that fails to start due to missing or corrupted datafiles by removing them. Oracle does not provide an easy way or user interface to delete or drop datafiles from a tablespace. Once a datafile is made part of a tablespace, it can no longer be detached or removed from the tablespace, although there are several workarounds.

How to Completely Drop the Whole Tablespace with All Datafiles

The easiest way to drop one or multiple datafiles is by dropping the entire tablespace together with its datafiles. If you no longer need the data contents of the tablespace, the following command will drop the tablespace, the datafiles, and the tablespace’s contents from the data dictionary. All of the objects that were contained in that tablespace are permanently removed.

DROP TABLESPACE tablespace_name INCLUDING CONTENTS AND DATAFILES;

If you don’t specify “AND DATAFILES”, Oracle will not drop the physical datafiles after the DROP TABLESPACE command, but you can always delete the files from the operating system shell (if the datafiles are locked, restart the server).

But always perform a backup of the database, as the “DROP TABLESPACE” command is irreversible. It’s also good practice to check and identify how many datafiles a tablespace actually has before performing the drop action, in order to avoid mistakes.
How to Check How Many Datafiles a Tablespace Has

To determine and identify all datafiles that belong to a tablespace, use the following query, with the tablespace name in capital letters:
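A minimal sketch of such a query, assuming the tablespace is named USERS:

```sql
SELECT file_name, bytes / 1024 / 1024 AS size_mb
  FROM dba_data_files
 WHERE tablespace_name = 'USERS';
```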

If a tablespace contains multiple datafiles and you just want to drop one or some of them while keeping the remaining datafiles together with the objects and contents, the objects or data of the affected tablespace must be exported first. Once exported, the tablespace can be dropped with the above “DROP TABLESPACE” command. Then recreate the tablespace with only the datafile(s) you initially wanted to keep, and import the objects into the recreated tablespace.

If one or more datafiles are missing after a recovery process or an accidental deletion, you can use the ALTER DATABASE DATAFILE ... OFFLINE DROP command to take the datafile offline so that the database can start up, after which the troubled tablespace can be dropped. (See the instructions to recover from missing datafiles.)
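As a sketch, with an assumed file name:

```sql
-- Take the missing or corrupt datafile offline so the database can open
ALTER DATABASE DATAFILE '/u01/oradata/whs/users02.dbf' OFFLINE DROP;
```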

How to Resize a Datafile to Minimum Size

Another alternative to dropping the datafile is shrinking it instead. This option is only possible if there are no extents in the datafile. If there are none, it is possible to resize the datafile down to a very small size (2 blocks), after which Oracle will no longer create any extent in the datafile. However, this workaround neither removes the datafile from the tablespace nor deletes the file physically, but it carries little risk, since only empty blocks are reduced and no data is truncated. It simply makes the datafile unusable while taking up close to no disk space. To resize a datafile, use the following query:
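A sketch of the resize, with an assumed file name (with an 8K block size, 2 blocks is 16K; Oracle refuses the resize if any extents still sit beyond the new size):

```sql
ALTER DATABASE DATAFILE '/u01/oradata/whs/users02.dbf' RESIZE 16K;
```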

Friday, July 16, 2010

Temporary tables were introduced in Oracle 8i and are created with the GLOBAL TEMPORARY syntax.

The definition of a GLOBAL TEMPORARY table is visible to all sessions, but
its contents are visible only to the session that put the data into it.

A temporary table can hold session-specific or transaction-specific data depending on how the ON COMMIT clause is used in the table's definition. The temporary table itself doesn't go away when the session or sessions are finished with it; only the data in the table is removed.
Temporary tables do not have the same redo generation as other tables: their data is temporary and does not have to be rebuilt in the event of a failure.
Here is an example creation of both a preserved and deleted temporary table:
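A sketch of both variants, using hypothetical table and column names:

```sql
-- Rows survive a COMMIT and disappear when the session ends
CREATE GLOBAL TEMPORARY TABLE temp_preserved (id NUMBER, val VARCHAR2(30))
  ON COMMIT PRESERVE ROWS;

-- Rows disappear at each COMMIT
CREATE GLOBAL TEMPORARY TABLE temp_deleted (id NUMBER, val VARCHAR2(30))
  ON COMMIT DELETE ROWS;
```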

With the PRESERVE option, the data is kept after a commit while with the DELETE option the data is removed from the table when a COMMIT occurs.

Even with the GLOBAL option set and SELECT permission granted to PUBLIC on the temporary table, we couldn't see the data in that table from another session. We could, however, perform a DESCRIBE on the table and insert values into it, which the owner of the temporary table then couldn't see.

Thursday, July 15, 2010

When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.

Note that table fragmentation is different from file fragmentation. When a lot of DML operations are applied on a table, the table will become fragmented because DML does not release free space from the table below the HWM.

HWM is an indicator of USED BLOCKS in the database. Blocks below the high water mark (used blocks) have at least once contained data. This data might have been deleted. Since Oracle knows that blocks beyond the high water mark don't have data, it only reads blocks up to the high water mark when doing a full table scan.
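Since 10g, a segment can be shrunk to release the free space below the HWM (a sketch with a hypothetical table name; the table must reside in an ASSM tablespace):

```sql
ALTER TABLE emp ENABLE ROW MOVEMENT;
ALTER TABLE emp SHRINK SPACE;   -- compacts the rows and lowers the HWM
```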

SQL> -- After verifying that the table can be redefined online, you manually create an empty interim table (in the same schema as the table to be redefined)
SQL>
SQL> create table TABLE2 as select * from table1 WHERE 1 = 2;

Table created.

Wednesday, July 14, 2010

This query provides a list of queries (the first 1,000 characters of each SQL text only) involved in full table scans. The total number of rows and blocks of each table is also displayed so that you can determine whether the full scan is degrading performance.

SELECT sp.object_owner, sp.object_name,
       (SELECT sql_text
          FROM v$sqlarea sa
         WHERE sa.address = sp.address
           AND sa.hash_value = sp.hash_value) sqltext,
       (SELECT executions
          FROM v$sqlarea sa
         WHERE sa.address = sp.address
           AND sa.hash_value = sp.hash_value) no_of_full_scans,
       (SELECT LPAD (NVL (TRIM (TO_CHAR (num_rows)), ' '), 15, ' ')
               || ' | '
               || LPAD (NVL (TRIM (TO_CHAR (blocks)), ' '), 15, ' ')
               || ' | '
               || BUFFER_POOL
          FROM dba_tables
         WHERE table_name = sp.object_name AND owner = sp.object_owner)
          "rows|blocks|pool"
  FROM v$sql_plan sp
 WHERE operation = 'TABLE ACCESS'
   AND options = 'FULL'
   AND object_owner IN ('USERNAME')
 ORDER BY 1, 2;

Alternatively, the following query finds recent FULL TABLE SCAN statements, which suggests that indexes may be missing:

SELECT *
FROM dba_hist_active_sess_history
WHERE sql_plan_options = 'FULL'
ORDER BY sample_time DESC;