Tired of cumbersome manual management of intervals in
dbms_job and need a new scheduling system inside the database? Look no further than the database itself.

Some of you may use the
dbms_job package extensively to submit database jobs to be run in the background, control the time or interval of a run, report failures, and much more. However, I have a feeling that most of you don't.

The problem with the package is that it can handle only PL/SQL code segments: just anonymous blocks and stored program units. It cannot handle anything outside the database, such as an operating system command file or executable. To do so, you would have to resort to an operating system scheduling utility such as
cron in Unix or the
AT command in Windows. Or, you could use a third-party tool, one that may even extend this functionality by providing a graphical user interface.

Even so,
dbms_job has a distinct advantage over these alternatives: it is active only when the database is up and running, so if the database is down, the jobs simply don't run. A tool outside the database must check for itself whether the database is up, and that can be difficult. Another advantage is that
dbms_job is internal to the database; hence you can access it via a database access utility such as SQL*Plus.

The Oracle Database 10g Scheduler feature offers the best of all worlds: a job scheduler utility right inside the database that is sufficiently powerful to handle all types of jobs, not just PL/SQL code segments, and that can help you create jobs either with or without associated programs and/or schedules. Best of all, it comes with the database at no additional cost. In this installment, we'll take a look at how it works.

Creating Jobs Without Programs

Perhaps the concept can be best introduced through examples. Suppose you have created a shell script to move archived log files to a different filesystem as follows:

/home/arup/dbtools/move_arcs.sh
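A job that runs this script every 30 minutes can be created with a single call. The following is a sketch; the job name and comment are illustrative:

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'ARC_MOVE',             -- illustrative name
      job_type        => 'EXECUTABLE',           -- an OS executable, not PL/SQL
      job_action      => '/home/arup/dbtools/move_arcs.sh',
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
      enabled         => TRUE,
      comments        => 'Move archived logs to a different filesystem');
END;
/
```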

We can specify the OS executable directly without creating it as a program first.

One advantage of Scheduler over
dbms_job is clear from our initial example: the ability to call OS utilities and programs, not just PL/SQL program units. This makes it the most comprehensive tool for managing Oracle Database and related jobs.
However, you may have noted another, equally important advantage: the ability to define intervals in natural language. Note that in the above example we wanted our schedule to run every 30 minutes; hence the parameter
REPEAT_INTERVAL is defined with a simple, English-like expression (not a PL/SQL one):

'FREQ=MINUTELY; INTERVAL=30'

A more complex example may help convey this advantage even better. Suppose your production applications become most active at 7:00AM and 3:00PM. To collect system statistics, you want to run Statspack from Monday to Friday at 7:00AM and 3:00PM only. If you use
DBMS_JOB.SUBMIT to create a job, the
NEXT_DATE parameter will look something like this:
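The original expression is not reproduced here, but a simplified sketch of the sort of PL/SQL date arithmetic NEXT_DATE requires might look like this (the weekend and early-morning cases are omitted for brevity; handling them makes the real expression longer still):

```sql
-- Before 3PM: next run is today at 3PM; otherwise tomorrow at 7AM.
-- Skipping Saturdays and Sundays would require yet more DECODE logic.
DECODE(SIGN(15 - TO_NUMBER(TO_CHAR(SYSDATE, 'HH24'))),
       1, TRUNC(SYSDATE) + 15/24,
          TRUNC(SYSDATE + 1) + 7/24)
```

The point is not the exact expression but how quickly the logic becomes convoluted.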

Now let's see the equivalent job in
DBMS_SCHEDULER. The parameter
REPEAT_INTERVAL will be as simple as:

'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15'

Furthermore, this parameter value can accept a variety of intervals, some of them very powerful. Here are some more examples:

Last Sunday of every month:

FREQ=MONTHLY; BYDAY=-1SUN

Every third Friday of the month:

FREQ=MONTHLY; BYDAY=3FRI

Every second Friday from the end of the month, not from the beginning:

FREQ=MONTHLY; BYDAY=-2FRI

The minus signs before the numbers indicate counting from the end, instead of the beginning.

What if you wanted to verify that the interval settings are correct? Wouldn't it be nice to see the various dates constructed from the calendar string? Well, you can preview the calculated next dates using the
EVALUATE_CALENDAR_STRING procedure. Using the first example (running Statspack every day from Monday through Friday at 7:00AM and 3:00PM), you can check the accuracy of your interval string as follows:
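A PL/SQL block along these lines (variable names are illustrative) walks the calendar string forward ten runs and prints each computed date:

```sql
SET SERVEROUTPUT ON
DECLARE
   l_start_date  TIMESTAMP;
   l_next_date   TIMESTAMP;
   l_return_date TIMESTAMP;
BEGIN
   l_start_date  := TRUNC(SYSTIMESTAMP);
   l_return_date := l_start_date;
   FOR ctr IN 1..10 LOOP
      -- Ask the Scheduler for the next date after l_return_date
      DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING (
         'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15',
         l_start_date, l_return_date, l_next_date);
      DBMS_OUTPUT.PUT_LINE('Next Run on: ' ||
         TO_CHAR(l_next_date, 'mm/dd/yyyy hh24:mi:ss'));
      -- Feed the result back in to get the run after that
      l_return_date := l_next_date;
   END LOOP;
END;
/
```

The output will resemble the listing below.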

Next Run on: 03/22/2004 07:00:00
Next Run on: 03/22/2004 15:00:00
Next Run on: 03/23/2004 07:00:00
Next Run on: 03/23/2004 15:00:00
Next Run on: 03/24/2004 07:00:00
Next Run on: 03/24/2004 15:00:00
Next Run on: 03/25/2004 07:00:00
Next Run on: 03/25/2004 15:00:00
Next Run on: 03/26/2004 07:00:00
Next Run on: 03/26/2004 15:00:00

This confirms that your settings are correct.

Associating Jobs with Programs

In the above case, you created a job independently of any program. Now let's create a program that refers to an operating system utility or script, a schedule specifying how often something should run, and then join the two to create a job.

First you need to make the database aware that your script is a program to be used in a job. To create this program, you must have the
CREATE JOB privilege.
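The sequence below sketches the three steps; the program, schedule, and job names are illustrative:

```sql
BEGIN
   -- Step 1: register the shell script as a named program
   DBMS_SCHEDULER.CREATE_PROGRAM (
      program_name   => 'MOVE_ARCS',
      program_type   => 'EXECUTABLE',
      program_action => '/home/arup/dbtools/move_arcs.sh',
      enabled        => TRUE);

   -- Step 2: a named schedule for every 30 minutes
   DBMS_SCHEDULER.CREATE_SCHEDULE (
      schedule_name   => 'EVERY_30_MINS',
      repeat_interval => 'FREQ=MINUTELY; INTERVAL=30');

   -- Step 3: join the program and the schedule into a job
   DBMS_SCHEDULER.CREATE_JOB (
      job_name      => 'ARCS_MOVE',
      program_name  => 'MOVE_ARCS',
      schedule_name => 'EVERY_30_MINS',
      enabled       => TRUE);
END;
/
```

Because the program and schedule are named objects, both can be reused by other jobs.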

This will create a job that executes the shell script move_arcs.sh every 30 minutes. It will be handled by the Scheduler feature inside the database, with no need for
cron or the
AT utility.

Classes, Plans, and Windows

A job scheduling system worth its salt must support the ability to prioritize jobs. For instance, a statistics collection job may spill into the OLTP workload window and hurt performance there. To ensure the stats collection job doesn't consume resources that affect OLTP, you would use
job classes,
resource plans, and
Scheduler Windows.

For example, while defining a job, you can make it part of a job class, which maps to a resource consumer group for allocation of resources. To do that, first you need to define a resource consumer group called, say,
OLTP_GROUP.
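A sketch of the two steps follows; the group and class names come from the text, while the pending-area calls are the standard Resource Manager boilerplate (plan directives for the group would be defined separately):

```sql
BEGIN
   -- Define the resource consumer group
   DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
   DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP (
      consumer_group => 'OLTP_GROUP',
      comment        => 'Group for OLTP-related jobs');
   DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

   -- Define a job class mapped to that consumer group
   DBMS_SCHEDULER.CREATE_JOB_CLASS (
      job_class_name          => 'OLTP_JOBS',
      resource_consumer_group => 'OLTP_GROUP',
      logging_level           => DBMS_SCHEDULER.LOGGING_FULL,
      log_history             => 45);
END;
/
```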

Let's examine the various parameters in this call. The parameter
LOGGING_LEVEL sets how much log data is tracked for the job class. The setting
LOGGING_FULL indicates that all activities on jobs in this class (creation, deletion, runs, alteration, and so on) will be recorded in the logs. The logs can be seen from the view
DBA_SCHEDULER_JOB_LOG and are available for 45 days as specified in the parameter
LOG_HISTORY (the default being 30 days). The resource consumer group associated with this class is also specified. The job classes can be seen from the view
DBA_SCHEDULER_JOB_CLASSES.

When you create a job, you can optionally associate a class to it. For instance, while creating
COLLECT_STATS, a job that collects optimizer statistics by executing a stored procedure
collect_opt_stats(), you could have specified:
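A sketch of such a call follows; the repeat interval shown is illustrative, since the text does not specify one:

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'COLLECT_STATS',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'collect_opt_stats',
      job_class       => 'OLTP_JOBS',              -- ties the job to the class
      repeat_interval => 'FREQ=WEEKLY; INTERVAL=1', -- illustrative schedule
      enabled         => TRUE);
END;
/
```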

This command will place the newly created job in the class
OLTP_JOBS, which is then mapped to the resource consumer group
OLTP_GROUP, which will restrict how much CPU can be allocated to the process, the maximum number of executions before it is switched to a different group, the group to switch to, and so on. Any job defined in this class will be governed by the same resource plan directive. This capability is particularly useful for preventing different types of jobs from taking over the resources of the system.

The Scheduler Window is a time frame with an associated resource plan, used to activate that plan, thereby supporting different priorities for jobs over different time frames. For example, some jobs, such as batch programs that update databases for real-time decision support, need high priority during the day but low priority at night (or vice versa). You can implement this schedule by defining different resource plans and then activating them with Scheduler Windows.
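A window that activates a daytime plan might be sketched as follows; the window and plan names are hypothetical, and the plan itself must already exist:

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_WINDOW (
      window_name     => 'DAYTIME_WINDOW',          -- illustrative name
      resource_plan   => 'DAYTIME_PLAN',            -- a plan you have defined
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=DAILY; BYHOUR=7',    -- opens at 7AM each day
      duration        => INTERVAL '10' HOUR,        -- stays open until 5PM
      comments        => 'Activates the daytime resource plan');
END;
/
```

While the window is open, its resource plan is in effect; a complementary nighttime window would activate the other plan.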

Monitoring

After a job is issued, you can monitor its status from the view
DBA_SCHEDULER_JOB_LOG, where the column STATUS shows the current status of the job. If it shows
FAILED, you can drill down further to find out the cause from the view
DBA_SCHEDULER_JOB_RUN_DETAILS.
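For example, queries along these lines show the overall status and then the details of a failure:

```sql
-- Overall status of recent job runs
SELECT job_name, status, log_date
  FROM dba_scheduler_job_log
 ORDER BY log_date DESC;

-- Drill down into failed runs; ADDITIONAL_INFO carries the error text
SELECT job_name, status, error#, additional_info
  FROM dba_scheduler_job_run_details
 WHERE status = 'FAILED';
```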

Administration

So far, we've discussed how to create several types of objects: programs, schedules, job classes, and jobs. What if you want to modify some of them to adjust to changing needs? Well, you can do that via APIs provided in the
DBMS_SCHEDULER package.
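For instance, to change an attribute of an existing job, you can use the SET_ATTRIBUTE procedure; the sketch below changes the repeat interval of the COLLECT_STATS job to the weekday schedule used earlier:

```sql
BEGIN
   -- Change the repeat interval of an existing job
   DBMS_SCHEDULER.SET_ATTRIBUTE (
      name      => 'COLLECT_STATS',
      attribute => 'repeat_interval',
      value     => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15');
END;
/
```

Similar calls (ENABLE, DISABLE, DROP_JOB, and so on) cover the rest of the lifecycle. There is also a graphical alternative.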

From the Database tab of the Enterprise Manager 10g home page, click on the Administration link. This will bring up the Administration screen, shown in Figure 1. All the Scheduler-related tasks are found under the heading "Scheduler" in the bottom right-hand corner, shown inside a red ellipse in the figure.

Figure 1: Administration page

All the Scheduler-related tasks, such as creating, deleting, and maintaining jobs, can easily be accomplished through the hyperlinked tasks on this page. Let's look at a few of them. We created all these objects earlier, so clicking on the Jobs tab will show a screen similar to Figure 2.

Figure 2: Scheduled jobs

Clicking on the job
COLLECT_STATS allows you to modify its attributes. The screen shown in Figure 3 appears when you click on the job name.

Figure 3: Job parameters

As you can see, you can change the job's parameters, as well as its schedule and options, by clicking on the appropriate tabs. After making all your changes, press the "Apply" button to make them permanent. Before doing so, you may want to click the button marked "Show SQL", which shows the exact SQL statement that will be issued, if for no other reason than to see which APIs are called, thereby helping you understand the workings behind the scenes. You can also store the SQL in a script and execute it later, or keep it as a template for the future.