Feed aggregator

How do you know if the disks you will be using from ANY particular vendor can muster up the IOPS and MBPS required to satisfy your current or future workloads?

In the article Measuring Disk I/O, we took a quick look at the amount of IOPS and MBPS, or workload, that our Oracle database is generating. These numbers are very important when we start to look at our system for available throughput, especially out to the disk subsystem. Why are these numbers important? As a very simple example, with no deeper meaning, suppose that, after running the scripts from that article, you find out your database is requesting, and getting, 100,000 IOPS. Well, if your disk subsystem has 1,000 disks and every disk is participating in satisfying those 100,000 IOPS, you could say that each disk is performing about 100 IOPS. You then have to ask yourself the following questions:

Is this good on a per disk basis?
Do I have room to grow if my throughput were to double or triple?
How much breathing room do I really have?
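If you want a rough number to plug into that per-disk arithmetic, one back-of-the-envelope approach is to sample the cumulative I/O counters in V$SYSSTAT. This is only a sketch of the idea, not the scripts from the Measuring Disk I/O article:

```sql
-- Cumulative-since-startup counters; sample twice, take the difference,
-- and divide by the elapsed seconds to get IOPS and bytes/sec.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('physical read total IO requests',
                'physical write total IO requests',
                'physical read total bytes',
                'physical write total bytes');
```

Divide the resulting IOPS figure by the number of participating disks and you have the per-disk number the questions above are asking about.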

I have just posted an article on www.thecheapdba.com that will take a look at extracting some I/O statistics so that you can monitor and determine just how well your disks are doing within Oracle.

How can I separate Oracle I/O to maximize performance?
Should I separate data files from index files?
Should I separate redo logs?

These questions, AND many more, seem to flood our minds as database administrators. They are easy to answer with generalities, but in practice it can be very difficult to come to a conclusion unless we take a look at how our disk subsystem is actually performing.

Now, before I get too many comments on why this won’t work, let me just say that this really is a cheap-man’s archive log mover. AND it does assume that you know your archive log process very well and the number of redo logs defined within your database. Setting the KEEPLOGS parameter inside this script is VERY CRITICAL. Setting this variable to a number higher than the number of redo logs in your database ensures the script will never move a log file that is still being written to. This script will move archive logs from one directory to another. It does this based on reverse order of archive log creation and then, depending on how many logs to keep, will skip the first number of logs defined by KEEPLOGS and move the rest.

@ECHO OFF
TITLE move_archive_logs.bat
REM move_archive_logs.bat
REM =====================
REM This script will move archive logs from one directory to another.
REM It does this based on reverse order of archive log creation and
REM then, depending on how many logs to keep, will skip the first
REM number of logs defined by KEEPLOGS and then move the rest.
REM
REM It is advisable to set KEEPLOGS greater than the number of redo
REM logs defined, allowing time for them to be written out to disk.
REM
DATE /T
TIME /T
SET VERSION_STRING=1.0
SET FROMDIR=\\Server_A\SHARE_C\Archive\
SET TODIR=\\Server_B\SHARE_BACKUP\Archive\
SET KEEPLOGS=6
SET KEEPDAYS=0
SET c=1
DIR %FROMDIR% /O-D /B > begdir.lst
FOR /F %%I IN (begdir.lst) DO call :MOVELOGFILE %%I
DIR %FROMDIR% /O-D /B > enddir.lst
goto :EOF
:MOVELOGFILE
IF %c% GTR %KEEPLOGS% (
    ECHO %FROMDIR%%1
    COPY "%FROMDIR%%1" "%TODIR%%1"
    IF EXIST "%TODIR%%1" (
        ERASE "%FROMDIR%%1"
    )
)
SET /a c=%c%+1
goto :EOF

A lot of the AWR reports ask, before spitting out their report, for the number of days back you would like to go before entering your beginning and ending snapshot IDs. When they list all of those snapshot IDs, they have left out, I think, one very important piece of information that just might cloud our judgment when selecting the proper snapshot range: whether, during the snapshot period, there has been a bounce of the database, resetting the statistics to zero.

This script I have here is very similar to the ones in the AWR reports ($ORACLE_HOME/rdbms/admin/awr*) but also shows when the database has been bounced during a snapshot period. We can do this with SQL that joins the DBA_HIST_DATABASE_INSTANCE and DBA_HIST_SNAPSHOT views, which show historical information on the snapshots in the Workload Repository. We obviously need to join these views on the dbid, instance_number, and, the important part, startup_time. We also make sure that we only bring back snapshots newer than the number of days back specified by the user, by comparing against the time of the actual snapshot (end_interval_time). Please note that this script will output a status of ‘**db restart**’ for those times that the database was down and unavailable. This is very important, as it shows us those times that Oracle was not collecting statistics (the database was down) and, more importantly, when the statistic counters were zeroed. We can report on a bounce condition when the startup_time and begin_interval_time are the same.
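The core of such a query might look like the following sketch. The &days_back substitution variable and the ‘**db restart**’ label are illustrative choices here; the full script may differ:

```sql
SELECT s.snap_id,
       d.instance_name,
       TO_CHAR(s.begin_interval_time, 'DD-MON-YYYY HH24:MI') begin_time,
       TO_CHAR(s.end_interval_time,   'DD-MON-YYYY HH24:MI') end_time,
       CASE
         WHEN s.begin_interval_time = s.startup_time
         THEN '**db restart**'
       END status
  FROM dba_hist_snapshot          s,
       dba_hist_database_instance d
 WHERE d.dbid            = s.dbid
   AND d.instance_number = s.instance_number
   AND d.startup_time    = s.startup_time
   AND s.end_interval_time >= SYSDATE - &days_back
 ORDER BY s.snap_id;
```

The CASE expression flags the first snapshot taken after an instance startup, which is exactly the point where the cumulative statistic counters were zeroed.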

Glad to be back. It HAS been a while, and hopefully you will forgive thecheapdba for staying away.
BUT this post, I am sure you will like. It is an installation guide for Oracle 11g on Linux CentOS-5.
As the installation is quite lengthy I will just provide you with a link to the main, new and “improved” website location.

Just what is Oracle’s Application Integration Architecture, or Oracle AIA, as many call it? Well, essentially, it’s a set of pre-built and pre-packaged business process integrations that leverage the Oracle BPEL Process Manager to connect multiple Oracle and non-Oracle applications, including Siebel, PeopleSoft and SAP software. Or, as Oracle president Charles Phillips once put it, Oracle AIA is “our [...]

The numbers are compelling: Oracle's share is 23.2%. A cluster of five other vendors have between 9% and 14% each. The rest is spread broadly, with each vendor commanding 2% or less. Oracle's share grew 23.3%, compared to growth of just under 12% for the sector as a whole.

I am glad to see this for a bunch of reasons. As Vice President of Embedded Technology at Oracle, I take a personal interest, of course. Oracle Berkeley DB, which Oracle acquired with Sleepycat in 2006, is aimed squarely at the embedded space. I have long maintained that embedded opportunities represent a significant source of new revenue and growth. Computers have escaped the data center, and special-purpose systems are getting deployed in living rooms, in the walls of buildings and in shirt pockets. There is an enormous amount of data travelling over networks and touching these systems.

The key to our success in the embedded space has been to assemble a family of products that address a wide range of requirements. A manufacturer building mobile telephone handsets needs to store crucial information reliably. So does a vendor building an optical network switch, and an ISV developing high-performance equity trading systems for financial markets. The three have very different requirements, though, and it's unrealistic to expect any single product to satisfy all of them.

All of our database products -- Oracle Database, Oracle TimesTen, Oracle Berkeley DB and Oracle Lite -- can be embedded in partner systems and deployed invisibly to end users. All contributed to our number one ranking by IDC.

It's not just the technology that has made us successful, though. The people who choose and deploy embedded databases are software developers. In the enterprise, we generally talk to DBAs and CIOs, but in the embedded world, we talk to architects and CTOs. Those conversations are different, and we have had to develop new expertise and new strategies as we have pursued embedded customers. Over the past several years, we've concentrated on building the technical, support and sales expertise necessary to win embedded business in countries around the globe. IDC's vendor share numbers suggest that we're doing okay.

Congratulations to Oracle's Embedded Global business team, and to the product development and support groups for all four products! This is a tremendous accomplishment.