Archive for April, 2011

Most people in IT comprehend the complexity and importance of the DBA role.

Database architects and administrators are the building blocks of any IT organization. The area of database management is so vast that some DBAs nowadays specialize in certain parts of the profession: replication, performance tuning, clustering technologies, etc.

No surprise that CNN Money places both the software architect and the database administrator in its Top 100 Best Jobs in America. In fact, software architect is the best job in America and database administrator is #7.

For those of you who have spent years in database architecture and administration, this perhaps comes as no surprise.

For those who might be puzzled and for the ones who would like to read more on the subject, check this article from Datavail called Database and DBA Management.

Last year, Sloane Stricker presented the Top 10 reasons DBAs quit. I would like to add what triggers this and what managers should try to do in order to keep their DBAs in the company:

– DBAs need at least 20% white time: that is, time of their own to read white papers, blogs, forums, Metalink, etc.; time to test, research and debug new features and old issues; time to attend online seminars, conferences, Oracle OpenWorld, etc.
– Try to find out what motivates each DBA. They are all individuals with their own preferences and interests. Bonus incentives should be really personal. I value teamwork very much, but it is just one finger that presses “Enter” after shutdown immediate.
– Keep their salaries competitive enough. Do not just tie them to the company’s job grades or job bands. There is nothing wrong with your most senior DBA having a higher annual income than your IT Director. Really!

“The formation of a Data Management Organization (= DMO) is Gartner’s recommended solution for correcting the sorry state of data management in the majority of enterprises. Competitive pressures mean that time is of the essence, Gartner sees the reorganization of the existing database administration groups into an embryonic DMO as a fast-track, but effective, first step in this process. This guidance document provides a framework to direct this organizational transformation.”

As this might sound a bit out of context here, I recommend that you read the whole article.

And finally back to the question in the title: it is both. The DBA should nowadays be both an architect and an administrator. The time of DBA 2.0 is over, it is now time for DBA 3.0!


When learning PL/1 and FORTRAN IV at the very beginning of the 80s, I remember our teacher telling us that perhaps some day software would be able to create software. Something like that; I am paraphrasing. At that time, software development was called programming.

With Oracle 11g, this became somewhat true. What happens behind the curtain is that Oracle’s CBO is able to correct/change some bad execution plans without human intervention. This actually turns out to be one of the best new features in 11g, and for sure the best 11g new feature for Data Center DBAs.

But even in 11.2.0.2, the complete automation of Automatic SQL Tuning is not turned on by default. You have to manually enable it, just like this:
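The enabling step was originally shown as a screenshot; as a sketch, the one-time configuration amounts to telling the automatic tuning task to accept its recommended SQL profiles on its own. The call below uses the documented DBMS_SQLTUNE interface; in 11.2.0.2 the equivalent DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER procedure can be used instead:

```sql
-- One-time configuration: let the nightly SYS_AUTO_SQL_TUNING_TASK
-- accept the SQL profiles it recommends, without human intervention
BEGIN
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/
```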

After creating an 11g database, that is all you have to do in order to automate the process! For all the details, check the 11gR2 documentation on Automatic SQL Tuning.

The first time I did this in a mission-critical production database was in November 2007, as soon as Oracle 11g was available for Solaris. However, I had spent hundreds of hours before that working and testing on 11.1.0.3 as an alpha tester.

In my opinion, starting with 12g, accepting SQL profiles should be enabled by default. However, in order to use Automatic SQL Tuning, and in particular SQL profiles, the Tuning and Diagnostics packs need to be licensed! Automatic SQL tuning via profiles is the most efficient way to massively tune SQL statements: the CBO learns from its mistakes!

Back to 11.2.0.2: there are 3 new and updated performance tuning features:

I never understood why IE and Mozilla show me a mixture of Finnish and English in the description fields:

Back to the beginning of my post: have a look at how Oracle fixes its own errors, the Optimizer optimizing its own execution plans! If you have not yet enabled Automatic SQL Tuning, do not hesitate to do it; ask me for help or assistance if needed:

As of today, 24 April 2011, Metalink returns only 1 article (from all sources) on DBMS_AUTO_SQLTUNE, while I get almost 500 on DBMS_SQLTUNE:


11.2.0.2 is not just a patchset of 11gR2. It is more like 11gR3. Last month, I presented my favorite top 10 new features for DBAs. However, the new table annotation feature deserves special attention.

After a new release of Oracle, it is always worthwhile to take a look into DBA_TABLES (or USER_TABLES). Now the last new column is called RESULT_CACHE. For now, it has two possible values: DEFAULT and FORCE.

FORCE means that the table is annotated. That is, if all table names in a given SQL query have this setting, then the result set returned by the query is always considered for caching unless the NO_RESULT_CACHE hint is specified for the query. If one or more tables named in the query are set to DEFAULT, then the effective table annotation for that query is DEFAULT, which means the result set will not be cached.

Let us see how it works by first creating a table with 15000 rows and running a full table scan query:
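The demonstration ran as a screenshot in the original post; a minimal sketch (the table name and contents are made up for illustration) could look like this:

```sql
-- Create a 15000-row demo table and annotate it
CREATE TABLE t AS
  SELECT level AS id, rpad('x', 100, 'x') AS pad
  FROM dual
  CONNECT BY level <= 15000;

ALTER TABLE t RESULT_CACHE (MODE FORCE);

-- The first run does the full table scan and populates the result cache;
-- an identical second run can be answered straight from the cache
SELECT COUNT(*) FROM t;
SELECT COUNT(*) FROM t;
```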

If you want to flush the result cache, just run: exec DBMS_RESULT_CACHE.FLUSH;

Here are the objects related to the topic, it is worth checking them:
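The list appeared as an image in the original post; to the best of my knowledge, the relevant objects are the DBMS_RESULT_CACHE package and the V$RESULT_CACHE_* views:

```sql
-- Dynamic performance views for the result cache
SELECT name, value FROM v$result_cache_statistics;      -- hits, misses, memory usage
SELECT type, status, name FROM v$result_cache_objects;  -- cached results and dependencies
SELECT * FROM v$result_cache_memory;                    -- cache memory blocks
SELECT * FROM v$result_cache_dependency;                -- result-to-object dependency links

-- The management package
SELECT DBMS_RESULT_CACHE.STATUS FROM dual;              -- ENABLED / DISABLED / BYPASS
SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;                   -- detailed memory breakdown
```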

Caching is not guaranteed by Oracle even if all tables in the query are annotated. The actual caching depends on internal restrictions for the client and server cache, query cache worthiness, and space available in the cache. Try, for instance, running SQL with SYSDATE in it a few times in a row and have a look:
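For example (assuming a table T annotated with RESULT_CACHE (MODE FORCE); the query itself is made up):

```sql
-- A result that depends on SYSDATE is not considered cache-worthy:
-- run it several times in a row and no entry for it shows up
-- in V$RESULT_CACHE_OBJECTS
SELECT MAX(id), SYSDATE FROM t;
SELECT MAX(id), SYSDATE FROM t;
```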

Queries that are not cached:

• Queries referencing remote objects
• Queries with complex types in the select list
• Snapshot-based or flashback queries
• Queries executed in a serializable or read-only transaction, or inside a flashback session
• Queries that have PL/SQL functions in them
• Queries on tables that have VPD policies enabled

How much can Oracle cache? It all depends on two init.ora parameters: RESULT_CACHE_MAX_SIZE and RESULT_CACHE_MAX_RESULT:

RESULT_CACHE_MAX_SIZE specifies the maximum amount of SGA memory that can be used by the result cache. Values of this parameter greater than 0 are rounded up to the next multiple of 32 KB. If the value of this parameter is 0, then the feature is disabled. The default value is derived from the values of SHARED_POOL_SIZE, SGA_TARGET, and MEMORY_TARGET. By default, the cache takes 0.25% of the MEMORY_TARGET parameter’s value, and the allocation cannot exceed 75% of the size of the shared pool. With my current MEMORY_TARGET of 384M, the result cache max size does not even reach 1M. For production systems, I would recommend that you set it to at least 32M.

RESULT_CACHE_MAX_RESULT specifies the percentage of RESULT_CACHE_MAX_SIZE that any single result can use. The default value is 5%. This should be OK in most cases.
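Checking and changing the two parameters is straightforward; the 32M value below is my recommendation from above, not a default:

```sql
-- Inspect the current result cache parameters
SHOW PARAMETER result_cache

-- Give the result cache 32M of SGA and cap any single result at 5% of it
ALTER SYSTEM SET result_cache_max_size = 32M;
ALTER SYSTEM SET result_cache_max_result = 5;
```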

And finally, look at the execution plan when annotated tables are involved:
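The plan was shown as a screenshot in the original post. Roughly, a RESULT CACHE operation appears on top of the normal plan; the cache identifier below is a placeholder:

```sql
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT COUNT(*) FROM t;   -- T annotated with RESULT_CACHE (MODE FORCE)

-- ------------------------------------------------------
-- | Id | Operation            | Name                    |
-- ------------------------------------------------------
-- |  0 | SELECT STATEMENT     |                         |
-- |  1 |  RESULT CACHE        | <cache_id>              |
-- |  2 |   SORT AGGREGATE     |                         |
-- |  3 |    TABLE ACCESS FULL | T                       |
-- ------------------------------------------------------
```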

A good question is: which tables from the database should I annotate? I would probably go through a few AWR reports and try to find the repetitive queries run at least hundreds of times, then force annotation of the tables used in those queries. And test first, plus follow on a regular basis what comes into V$RESULT_CACHE_OBJECTS! With Oracle, at times you get the opposite of what is expected.

P.S. I often see in AWR reports how WebLogic runs “select * from dual” thousands of times per hour. In a recent report, I saw the DUAL table as the number one segment in terms of logical reads. You can annotate even system tables:

SQL> alter table dual result_cache (mode force);

Table altered.


Believe it or not, the more you automate the activities around database management, the more problems you can get (derived from the automation itself) and, on the other hand, the more time you gain to concentrate on something more urgent, important or interesting. However you see it.

Of course, the argument “If you automate everything well, then you will have almost no issues” is pretty valid, but in reality it is not more than just an argument.

There is even a book called “Easy Oracle Automation”, not to mention the 1990s book “High Performance Oracle Database Automation: Creating Oracle Applications with SQL and PL/SQL”, when automation in the database had a totally different meaning from what it has now.

Just look at the prices above: you can get both books for 10 bucks!

You may decide for yourself if the words easy and Oracle fit in the same sentence. Perhaps yes, perhaps no. Setting AMM, for example, is just one (or two) parameter(s) in the init.ora file, but beware of the time you might spend on bug research. Try searching the Metalink bug database for AMM and guess whether you will get dozens or grosses of hits.

Are you still with me? Now, there are 5 major areas of database automation:

1. Backup automation: this is the oldest area of automation around the database. Very few DBAs bother to back up their databases manually. However, how many can automate restore and recovery 🙂 And someone should still go manually through the logs of failed automated backups, debug the reasons and re-run the backups.

2. Monitoring automation: Grid Control is doing it for you. Just make sure that the agents’ status is also monitored, for if an agent is down or hung then there is no monitoring. Of course, we want to make sure that the agents’ monitoring mechanism is also under monitoring, and so on and so on … 🙂 In my opinion, Grid Control is the best tool developed by Oracle around the database. And it is free if you do not use the extra packs. Of course, the value of Grid Control without the packs is like the value of a database with no users in it (but then security is not an issue, right?)

3. Automation of general DBA activities: in theory, almost everything can be automated: startup and shutdown of the database, etc. The question is: is it really worth it? Do you automate Oracle installations and patching? Well, patching can be semi-automated using the provisioning pack, in case you have enough time to spend on setting up the system. I have seen all opinions on that subject. Can you automate autoextend of the file system when it is 99% full? No way, right? Do you automatically add datafiles to a tablespace when it is almost full?

4. Security automation: now this is a nice one. Often neglected by some DBAs, automated security of the database stack can be extremely beneficial in case of should-not-have-happened situations. Often auditing is configured wrongly. The value you get from properly set up database security automation is tremendous. Many companies understand the security risks and issues very well, but whether they really comprehend them or not is an open question.

5. Automatic SQL Tuning: it is my favorite (I have talked about it already and I will talk about it at Oracle OpenWorld). I will post separately on the subject, but here in brief is what it is:

– Oracle automatically runs the SQL Tuning Advisor on selected high-load SQL statements from the Automatic Workload Repository (AWR) that qualify as tuning candidates
– This task, called Automatic SQL Tuning, runs in the default maintenance windows on a nightly basis
– You can customize attributes of the maintenance windows, including start and end time, frequency, and days of the week
– Automatic SQL tuning runs by default for at most one hour during a maintenance window
– Automatic SQL Tuning needs one-time configuration
– Available from 11g onwards
– One of the best new features in 11g and the best 11g new feature for Data Center DBAs
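The nightly task itself is managed through the DBMS_AUTO_TASK_ADMIN package; as a sketch:

```sql
-- Is the automatic SQL tuning task enabled?
SELECT client_name, status
FROM dba_autotask_client
WHERE client_name = 'sql tuning advisor';

-- Switch the nightly run off and on
EXEC DBMS_AUTO_TASK_ADMIN.DISABLE('sql tuning advisor', NULL, NULL);
EXEC DBMS_AUTO_TASK_ADMIN.ENABLE('sql tuning advisor', NULL, NULL);
```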

Have you seen such promises on how DBA’s life can become easier due to automation?

As yesterday was April 1st, All Fool’s Day, I decided to wait a day or so before posting on the new free Oracle database edition: 11g XE beta. It is in fact out and downloading it took me less than 1 minute even via my slow wireless connection.

You probably often click the “Accept” button and then “Next”, but this time I decided to read the entire “Oracle Database Express Edition License Agreement”. You will probably also end up with some open questions. Just one example:

During installation, while reading the agreement, you will see: “You may make one copy of the programs for backup purposes.” If you check the oracle.com license information, it says: “You may make a reasonable number of copies of the programs for backup purposes.” But we should not forget that we are talking about a beta release of the software, so many things are still to be fixed.

I would like to comment on two main things: restrictions and 11g upgrade.

1. 11g XE main restrictions and limitations:

– if Oracle Database XE is installed on a computer with more than one CPU (including dual-core CPUs), then it will consume, at most, processing resources equivalent to one CPU.

– only one installation of Oracle Database XE can be performed on a single computer which does not affect any existing installation or new installations of Oracle Database 11g Standard Edition, Oracle Database 11g Standard Edition One, or Oracle Database 11g Enterprise Edition. In addition, users can run only one instance of the Oracle Database XE database on each individual computer.

– the maximum amount of user data in an Oracle Database XE database cannot exceed 11 gigabytes, compared to 4 gigabytes in 10g XE. If the user data grows beyond this limit, then an ORA-12592 error will appear. Note that the 11 GB limit covers user data only!

– the maximum amount of RAM that an Oracle Database XE database uses cannot exceed 1 gigabyte, even if more is available. The exact amount of RAM that Oracle Database XE uses is the sum of the System Global Area (SGA) and the aggregate Program Global Area (PGA).

– HTTPS is not supported natively with the HTTP listener built into Oracle Database XE.

– the list of all features not available in 11g XE is here. It is a rather long list, and XE lacks lots of important stuff related to HA, performance, security, replication and day-to-day DBA work.

2. Upgrade from 10g XE to 11g XE:

– copy the gen_inst.sql file from the upgrade directory of 11.2 XE home to some local directory
– connect to 10.2 XE database as SYS and run gen_inst.sql thus generating install.sql, gen_apps.sql and other .sql files in the local folder containing gen_inst.sql
– export the data from 10.2 XE database
– deinstall 10.2 XE if installation of 11.2 XE is planned on the same computer
– install the 11.2 XE database
– import the data into the 11.2 XE database
– connect to 11.2 XE database as SYS user and run the script install.sql, which was generated after running gen_inst.sql
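The steps above could be sketched roughly as follows (paths, passwords and the export method are hypothetical; follow the official XE documentation for the real procedure):

```sql
-- On the 10.2 XE database: generate the install scripts
-- (gen_inst.sql was copied from the 11.2 XE upgrade directory to /tmp/xe_upgrade)
CONNECT / AS SYSDBA
@/tmp/xe_upgrade/gen_inst.sql

-- From the shell: export the 10.2 XE data, for example
--   exp system/<password> FULL=Y FILE=/tmp/xe_upgrade/xe10.dmp
-- then deinstall 10.2 XE, install 11.2 XE, and import:
--   imp system/<password> FULL=Y FILE=/tmp/xe_upgrade/xe10.dmp

-- On the new 11.2 XE database: run the generated script as SYS
CONNECT / AS SYSDBA
@/tmp/xe_upgrade/install.sql
```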

The most important benefits of 11g XE are (as I see them):

A. Savings on license costs by keeping small databases on top of XE, in case you can live with the limitations, of course.

B. In more global terms, you can use 11g XE for database sharding. 11 gigabytes of user data is quite sufficient for one shard! When it grows above, say, 10G, you can split it into at least two new shards.

P.S. Will 12g XE have the limitation of 12 gigabytes of user data 🙂 ?


Julian is the Global Database Lead of Accenture. His primary responsibility is managing and leading the Global Oracle Technology Practice which includes Autonomous Cloud, IaaS, PaaS, Database Services, Engineered Systems, Java, Middleware, Security and all other areas falling under Oracle Technology. He is also the Accenture-Enkitec Group Managing Director for ...