The errors while executing a script/SQL can be logged to a table (SPERRORLOG, by default).
SQL> set errorlogging on --->> errors will be logged to SPERRORLOG
SQL> set errorlogging on table scott.error_log --->> errors will be logged to a user-defined table
SQL> set errorlogging on truncate --->> truncates all existing rows in the logging table
SQL> set errorlogging on identifier identifier-name --->> tags the logged rows, useful when querying the logging table
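A minimal session sketch (the failing statement and the identifier name are illustrative):

```sql
SQL> set errorlogging on identifier batch1
SQL> select * from no_such_table;   -- fails with ORA-00942; the error row is logged
SQL> select statement, message from sperrorlog where identifier = 'batch1';
```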

From Oracle 11g, we can create a restore point for a specific SCN in the past or for a past point in time.
SQL> CREATE RESTORE POINT res_jun10 AS OF SCN 2340009;
SQL> CREATE RESTORE POINT res_jun10 AS OF TIMESTAMP to_date('01-04-2010 07:30','DD-MM-YYYY HH24:MI');
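The restore point can later serve as a target for Flashback Database. A sketch (requires the database mounted, and flashback logging or a guaranteed restore point):

```sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT res_jun10;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> DROP RESTORE POINT res_jun10;
```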

New PIVOT (to create a crosstab report on any relational table) and UNPIVOT (to convert any crosstab report to be stored as a relational table) operations. Pivot can produce the output in text or XML.
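A minimal PIVOT sketch, assuming the classic SCOTT.EMP demo table:

```sql
-- one row per job, with a salary-total column per department
SELECT *
FROM   (SELECT deptno, job, sal FROM emp)
PIVOT  (SUM(sal) FOR deptno IN (10 AS dept10, 20 AS dept20, 30 AS dept30));

-- XML output: use PIVOT XML with the ANY keyword instead of a fixed IN-list
SELECT *
FROM   (SELECT deptno, job, sal FROM emp)
PIVOT  XML (SUM(sal) FOR deptno IN (ANY));
```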

Table compression now occurs on all DML activities. The blocks will be compressed, not the rows.
SQL> create table table-name ... compress for all operations;

PL/SQL

Native compilation no longer requires a C compiler, but the PLSQL_CODE_TYPE parameter must be set to NATIVE.
SQL> alter session set plsql_code_type = native;

New SIMPLE_INTEGER datatype - a subtype of PLS_INTEGER that is always NOT NULL, wraps instead of overflowing, and is faster than PLS_INTEGER, especially under native compilation.

Ability to reference sequences directly (no need to SELECT seq.nextval INTO n FROM dual). We can simply write n := seq.nextval;
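A small sketch (the sequence name is illustrative):

```sql
CREATE SEQUENCE my_seq;

DECLARE
  n NUMBER;
BEGIN
  n := my_seq.NEXTVAL;  -- direct reference; no SELECT ... INTO ... FROM dual needed
  DBMS_OUTPUT.PUT_LINE(n);
END;
/
```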

A dynamic (DBMS_SQL) cursor can be converted to a ref cursor and vice versa (DBMS_SQL.TO_REFCURSOR / DBMS_SQL.TO_CURSOR_NUMBER).

Starting with Oracle 11g, a CLOB can be passed as an input parameter to EXECUTE IMMEDIATE.

Adaptive Cursors - if a cursor has bind variables, the database observes the cursor for a while to see what types of values are passed to the variables and whether the execution plan needs recalculation. Adaptive cursors are activated and used automatically.

We can maintain version compatibilities at the diskgroup level.
SQL> alter diskgroup dg-name set attribute 'compatible.rdbms'='11.1';
SQL> alter diskgroup dg-name set attribute 'compatible.asm'='11.1';

ASM drops disks if they remain offline for more than 3.6 hours (the default). The diskgroup's time limit can be altered by changing the DISK_REPAIR_TIME attribute, with a unit of minutes (M/m) or hours (H/h).
SQL> alter diskgroup dg-name set attribute 'disk_repair_time'='4.5h';

In Data Pump import, we can specify how partitions should be transformed by using PARTITION_OPTIONS.

Dump files can be compressed. In Oracle 10g, only metadata could be compressed; from 11g, both data and metadata can be compressed. The dump file is uncompressed automatically during import.

Encryption: the dump file can be encrypted while it is created. The encryption applies to the entire dump file, not just to the encrypted columns as in Oracle Database 10g.

Masking: when we import data from production into test or development instances, we have to make sure sensitive data such as credit card details are obfuscated/remapped (altered in such a way that they are not identifiable). From 11g, Data Pump enables us to do that by creating a masking function and then using it during import.
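A sketch of a masking function and its use during import (all object names here are illustrative):

```sql
-- a trivial masking function: keep only the last 4 digits of a card number
CREATE OR REPLACE PACKAGE mask_pkg AS
  FUNCTION mask_cc(p_cc VARCHAR2) RETURN VARCHAR2;
END;
/
CREATE OR REPLACE PACKAGE BODY mask_pkg AS
  FUNCTION mask_cc(p_cc VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'XXXX-XXXX-XXXX-' || SUBSTR(p_cc, -4);
  END;
END;
/
```

The function is then referenced on the impdp command line, e.g. remap_data=scott.payments.card_no:scott.mask_pkg.mask_cc.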

Multisection backups of the same file - RMAN can back up or restore a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section, which is a contiguous range of blocks. This speeds up overall backup and restore performance, particularly for bigfile tablespaces, in which a datafile can be sized upwards of several hundred GB to several TB.
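For example, to back up one large datafile in 500 MB sections spread across the allocated channels (the datafile number is illustrative):

```sql
RMAN> BACKUP SECTION SIZE 500M DATAFILE 7;
```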

Archived redo log failover - this feature enables RMAN to complete backups even when some archiving destinations are missing logs or contain logs with corrupt blocks, as long as a local archive log destination is configured along with the FRA.

Virtual Private Catalog - a recovery catalog administrator can grant visibility of a subset of the registered databases in the catalog to specific RMAN users.
RMAN> grant catalog for database db-name to user-name;

SQL Result Cache - a new memory area in the SGA for storing SQL query results, PL/SQL function results and OCI call results. When we execute a query with the result_cache hint, the results are stored in the SQL Result Cache, so subsequent executions can be served straight from the cache, which is substantially faster. The cache is controlled by result_cache_max_size, result_cache_max_result, result_cache_mode and result_cache_remote_expiration.
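A quick sketch, assuming the classic SCOTT.EMP demo table:

```sql
SQL> SELECT /*+ result_cache */ deptno, SUM(sal) FROM emp GROUP BY deptno;
SQL> SELECT name, status FROM v$result_cache_objects;  -- inspect what is cached
```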

Invisible indexes - the index is maintained on DML but ignored by the optimizer. Handy for testing the impact of an index without dropping it.
SQL> alter index index-name invisible;
SQL> alter index index-name visible;

Oracle SecureFiles - LOB storage that can be several times faster than normal file systems (see the SecureFiles section below).

Availability improvements

Ability to apply many patches on-line without downtime (RAC and single instance databases).

New parameters have been added to enhance the default security of the database:
* SEC_RETURN_SERVER_RELEASE_BANNER
* SEC_PROTOCOL_ERROR_FURTHER_ACTION
* SEC_PROTOCOL_ERROR_TRACE_ACTION
* SEC_MAX_FAILED_LOGIN_ATTEMPTS
* SEC_DISABLE_OLDER_ORACLE_RPCS

Manageability improvements

New MEMORY_TARGET and MEMORY_MAX_TARGET parameters. When we set MEMORY_TARGET, Oracle dynamically assigns memory to the SGA and PGA as and when needed, i.e. MEMORY_TARGET covers SGA_TARGET + PGA_AGGREGATE_TARGET. New views related to this are v$memory_dynamic_components and v$memory_resize_ops.

New DIAGNOSTIC_DEST parameter as a replacement for BACKGROUND_DUMP_DEST, CORE_DUMP_DEST and USER_DUMP_DEST. It defaults to $ORACLE_BASE/diag/.

From 11g, we have two alert log files: the traditional alert_SID.log (in DIAGNOSTIC_DEST/trace) and a log.xml file (in DIAGNOSTIC_DEST/alert). The XML file gives a lot more information than the traditional alert log. We can have logging information for DDL operations written to the alert logs. When log.xml reaches 10 MB, it is renamed and a new file is created. log.xml can be accessed from the ADR command line.
ADRCI> show alert

Logging of DDL operations to the alert log files is not enabled by default; we must set the new parameter to TRUE.
SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;

The parameter (p) file and server parameter (sp) file can be created from memory.
SQL> create pfile[=location] from memory;
SQL> create spfile[=location] from memory;

From 11g, the server parameter file (spfile) is in a new format that is compliant with Oracle Hardware Assisted Resilient Data (HARD).

DDL wait option - instead of failing immediately when it cannot get a lock, Oracle will wait up to the specified number of seconds during DDL operations and keep retrying.
SQL> ALTER SYSTEM/SESSION SET DDL_LOCK_TIMEOUT = n;

We can define statistics to be pending, which means newly gathered statistics will not be published or used by the optimizer — giving us an opportunity to test the new statistics before we publish them.
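A sketch of the pending-statistics workflow using DBMS_STATS (schema and table names are illustrative):

```sql
-- mark future stats on this table as pending rather than published
SQL> exec dbms_stats.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'FALSE');
SQL> exec dbms_stats.gather_table_stats('SCOTT', 'EMP');

-- try out the pending stats in the current session only
SQL> alter session set optimizer_use_pending_statistics = true;

-- publish them once satisfied (or use delete_pending_stats to discard)
SQL> exec dbms_stats.publish_pending_stats('SCOTT', 'EMP');
```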

From Oracle Database 11g, we can create extended statistics on (i) expressions of values, not only on columns, and (ii) multiple columns (column groups), not only on single columns.
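A sketch using DBMS_STATS.CREATE_EXTENDED_STATS (the CUSTOMERS table and its columns are illustrative):

```sql
-- column group: city and state values are correlated
SQL> select dbms_stats.create_extended_stats('SCOTT', 'CUSTOMERS', '(city, state)') from dual;

-- expression statistics
SQL> select dbms_stats.create_extended_stats('SCOTT', 'CUSTOMERS', '(upper(last_name))') from dual;

-- regather so the extensions are populated
SQL> exec dbms_stats.gather_table_stats('SCOTT', 'CUSTOMERS');
```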

Flashback Data Archive - historical data for a table is kept in a flashback archive created explicitly for that purpose, so long-term flashback queries no longer depend on undo retention. Flashback data archives can be defined for any table, in any tablespace. They are written by a dedicated background process called FBDA, so there is less impact on performance, and old data is purged automatically once it passes the retention period.
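A creation sketch (the archive, tablespace and table names are illustrative):

```sql
SQL> CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts RETENTION 1 YEAR;
SQL> ALTER TABLE scott.emp FLASHBACK ARCHIVE fda1;
-- flashback queries on scott.emp can now reach back up to a year
SQL> SELECT * FROM scott.emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' DAY);
```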

The hash value of passwords in DBA_USERS (and in ALL_USERS and USER_USERS) is now blank. If you want to see the hash, query SYS.USER$.

The default value for AUDIT_TRAIL is now DB, not NONE. By default some system privileges will be audited.

LogMiner can be accessed from Oracle Enterprise Manager.

Data Guard improvements

Oracle Active Data Guard - Standby databases can now simultaneously be in read and recovery mode - so use it for running reports 24x7.

Online upgrades: Test on standby and roll to primary.

Snapshot standby database - physical standby database can be temporarily converted into an updateable one called snapshot standby database.

Creation of a physical standby has become easier.

From Oracle 11g, we can control archive log deletion by setting the LOG_AUTO_DELETE initialization parameter to TRUE. It must be coupled with the LOG_AUTO_DEL_RETENTION_TARGET parameter, which specifies the number of minutes an archive log is kept before it is purged. The default is 24 hours (1440 minutes).

Incremental backups can be taken on a readable physical standby.

Offload: Complete database and fast incremental backups.

Logical standby databases now support XML and CLOB datatypes as well as transparent data encryption.

We can compress the redo data that is shipped to the standby server by setting the COMPRESSION=ENABLE attribute on the corresponding LOG_ARCHIVE_DEST_n parameter.
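For example (the service name is illustrative):

```sql
SQL> ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby ASYNC COMPRESSION=ENABLE' SCOPE=BOTH;
```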

From Oracle 11g, logical standby provides support for DBMS_SCHEDULER.

When transferring redo data to the standby, if the standby does not respond in time, the log transport service waits for the specified timeout value (set by the NET_TIMEOUT=n attribute) and then gives up.

In Oracle 11g, block change tracking is now supported in the standby database.

A new package and procedure, DBMS_DG.INITIATE_FS_FAILOVER, has been introduced to programmatically initiate a fast-start failover.

SecureFiles

SecureFiles provide faster access to unstructured data than normal file systems and combine the benefits of LOBs and external files. For example, write access to SecureFiles is faster than a standard Linux file system, while read access is about the same. SecureFiles can be encrypted for security, deduplicated and compressed for more efficient storage, cached (or not) for faster access (or to save buffer cache space), and logged at several levels to reduce the mean time to recover (MTTR) after a crash.
SQL> create table table-name ( ... lob-column lob-type [deduplicate] [compress high/low] [encrypt using 'encryption-algorithm'] [cache/nocache] [logging/nologging] ...) lob (lob-column) store as securefile ...;
To create SecureFiles:
(i) the initialization parameter db_securefile should be set to PERMITTED (the default value);
(ii) the tablespace where we are creating the securefile should be Automatic Segment Space Management (ASSM) enabled (the default mode in Oracle Database 11g).

Real Application Testing (RAT)

Real Application Testing (RAT) makes decision making easier for migrations, upgrades, patching, initialization parameter changes, object changes, hardware replacements, operating system changes and moves to RAC environments. RAT consists of two components:

Database Replay - capture production workload and replay on different (standby/test/development) environment. Capture the activities from source database in the form of capture files in capture directory. Transfer these files to target box. Replay the process on target database.

SQL Performance Analyzer (SPA) - identifies SQL execution plan changes and performance regressions. SPA allows us to get results of some specific SQL or entire SQL workload against various types of changes such as initialization parameter changes, optimizer statistics refresh, and database upgrades, and then produces a comparison report to help us assess their impact. Accessible through Oracle Enterprise Manager or dbms_sqlpa package.

From 11g, when creating global temporary tables, we can specify which TEMPORARY tablespace to use.

Online application upgrades and hot patching. Feature-based patching is also available.

Real-time SQL Monitoring allows us to see different metrics of SQL statements as they execute. The stats are exposed through V$SQL_MONITOR, which is refreshed every second.

"duality" between SQL and XML - users can embed XML within PL/SQL and vice versa.

New binary XML datatype, a new XML index & better XQuery support.

Query rewriting will occur more frequently and for remote tables also.

Automatic Diagnostic Repository (ADR) - automated capture of fault diagnostics for faster fault resolution. The location of the files depends on the DIAGNOSTIC_DEST parameter. ADR can be managed from Database Control or from the command line:
$ adrci

Repair advisors to guide DBAs through the fault diagnosis and resolution process.

The dbms_stats package has several new procedures to aid in supplementing histogram data, and the state of these extended statistics can be seen in the user_tab_col_statistics view:
dbms_stats.create_extended_stats
dbms_stats.show_extended_stats_name
dbms_stats.drop_extended_stats

New package DBMS_ADDM introduced in 11g.

DBMS_COMPARISON is a new package introduced in Oracle Database 11g R1, used for comparing database objects in different databases.