DB2 text search indexing is a resource-consuming task, and the time it takes depends on the number of documents to be indexed, the type and size of the documents, and so on. In this blog, I would like to briefly explain how we can check the progress of an ongoing DB2 Text Search index update.

The total number of documents to be indexed can be found by counting the rows in the base table (in this case, the table BOOKS) on which the index is created. If an incremental index update is in progress, count the rows in the staging table instead to get the total number of documents that should end up in the collection altogether.

Note: We can get the collection name for any text search index by querying the SYSIBMTS.TSCOLLECTIONNAMES administrative view.

db2 "select COLLECTIONNAME from SYSIBMTS.TSCOLLECTIONNAMES where INDNAME='MYSCHEMA' and INDSCHEMA='MYTITLEIDX'"

Thus we can check the progress of DB2 Text Search index updates by comparing the number of rows in the base table (for initial updates) or in the staging table (for incremental updates) with the number of documents in the collection reported by the adminTool utility.

The DB2 Text Search event view provides information about indexing status and error events. Events like the starting, committing and finishing of an index update are also available in this event view. Users can get the event view name for a given text search index by querying the SYSIBMTS.TSINDEXES administrative view.

db2 "SELECT EVENTVIEWSCHEMA, EVENTVIEWNAME FROM SYSIBMTS.TSINDEXES WHERE INDSCHEMA='MYSCHEMA' AND INDNAME='MYTITLEIDX'"

We are all seeing it! Databases are getting bigger and bigger, and queries are becoming more and more complex. Under these circumstances, if a particular SQL query does not perform well, it is critical to find and remove the performance bottleneck as soon as possible. In this blog, I would like to share a systematic, step-by-step method to tune a given query.

Prerequisite: We need to know the most important resources required for query processing.

A query is processed in two phases: compilation and execution. During compilation phase, the most important work done is query optimization. The optimizer considers various access plans and chooses the least expensive one for executing the given query. During execution phase, the query is actually executed by following the steps mentioned in the access plan.

For query-tuning purposes, it is very important to know the major resources required for the query compilation and execution phases. The compilation phase will use/access the statement heap, catalog cache, package cache, etc. The execution phase will use/access the package cache, sort heap, shared sort heap, lock list, buffer pool, etc. The following block diagram shows these resources. It also shows which resources are part of the agent's private memory and which are part of database global memory.

Step 1: Monitor the actual resources consumed while DB2 was processing the given query.

By using the access plan (db2exfmt output), DB2 snapshots and/or DB2 SQL monitoring functions, we can quickly identify the heavily utilized resources for processing our SQL query. Sometimes it is enough to know which resource was not sufficient for our query. Let us consider a simple example.

Example 1: If the db2exfmt output shows the following message, it tells us that the statement heap was not enough for the query compiler:

SQLCA : (Warning SQLCA from compile)
SQLCODE 437; Function SQLNO***; Message token '1'; Warning 'None'

In this case, the access plan could be sub-optimal. Increasing the statement heap should help here.
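For example (a hedged sketch; the STMTHEAP parameter exists as shown, but the value below is purely illustrative and must be sized for your workload), the statement heap can be raised with:

db2 "UPDATE DB CFG FOR mydb USING STMTHEAP 16384"

STMTHEAP is specified in 4 KB pages, so 16384 pages corresponds to 64 MB of statement heap.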

However, many times it is not enough to just know the heavily utilized resource. We also need to find out the context in which the resource was utilized. This is where access plan analysis helps a lot!

Step 2: Analyze the access plan to get more insights.

The access plan (generated by using the db2exfmt tool) provides good insights for query tuning. We can find out the "context" in which the given resource was heavily consumed. Let us take an example:

Example 2: If the snapshots/monitoring-functions show one or more of the following symptoms, then it means the sort-heap was very heavily consumed:
- sort overflows
- rows written for a SELECT query
- temporary data logical/physical reads
- hash loops
- hash join overflows

In this case, we need to use section-actuals and find out if the input cardinality of SORT/HSJOIN operation was underestimated or not. If underestimated, then we need to provide more statistics to help optimizer choose a better access plan. Otherwise, we can create an index, which may help us avoid the SORT/HSJOIN operations altogether.
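As a hedged illustration (these monitor elements are part of MON_GET_WORKLOAD to the best of my knowledge; verify the exact column list for your DB2 version), several of the symptoms listed above can be checked with one query:

db2 "SELECT TOTAL_SORTS, SORT_OVERFLOWS, POOL_TEMP_DATA_L_READS, POOL_TEMP_DATA_P_READS FROM TABLE(MON_GET_WORKLOAD('', -2)) AS T"

A high ratio of SORT_OVERFLOWS to TOTAL_SORTS, or heavy temporary-data reads, points at sort-heap pressure.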

In this way, we can find the reason behind a slow-running query and then figure out ways to improve its performance. In this blog, I have given a high-level overview of the step-by-step query tuning method. Stay tuned for some concrete examples! In my next blog, I will share a scenario where a statistical view helped improve query performance.

Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.

The DB2 Universal image above is used to update binaries in cases where the user has more than one product installed on one installation path. For example, if you have a DB2 server product and IIRW (IBM InfoSphere Federation Server Relational Wrappers) installed on the same installation path, then you have to use the Universal image for the fixpack update.

If the user does not remember which product or products are installed in a location and wants to do a fixpack update, then the Universal image is the only solution. The Universal image automatically detects the product or products installed in the location and updates the binaries accordingly.

Starting with v10.5 FP3, the Universal image is capable of doing a fresh installation. From this level, the user can install the fixpack image to a new location.

Server_t image: DB2-linuxx64-server_t-10.5.0.4-FP004

The DB2 Server_t image above is used to update binaries in cases where the user has only one product installed. For example, the server_t image can be used to update products like DB2 Server Edition, DB2 Connect Server Edition, DB2 Express Edition, DB2 Client and DB2 Runtime Client.

The server_t image cannot be used for updates when more than one product is installed in the same location.

It can be used to test fixpack images by installing them to a new location without impacting production databases.
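For example (a hedged sketch, assuming the copy to update lives under /opt/ibm/db2/V10.5; paths and image directory names will differ in your environment), both image types are applied with the installFixPack command shipped inside the extracted image:

cd universal                              # or the server_t image directory
./installFixPack -b /opt/ibm/db2/V10.5    # -b names the installation path to update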

Hi Dave - we are trying to implement Java in our shop (zOS). Would like to know if there would be any major performance challenges in running a batch application on mainframes using DB2 (zOS) vs running a COBOL application with DB2.

Are there any stats available for this to look at.. any help in this direction is greatly appreciated, sir..!

Video Tutorial - How to load compressed files directly, without extracting to the hard disk.
Solution:
Create a named pipe file in Unix: mkfifo data.del
Unzip (unzip -p) the compressed file to the named pipe file
The unzip process does not extract anything to the hard disk
Import / Load from the named pipe of DEL, inserting into the table (see the sketch below)
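Put together, a hedged sketch of the whole flow (the archive name, pipe path and table name are illustrative):

mkfifo /tmp/data.del                      # create the named pipe
unzip -p data.zip > /tmp/data.del &       # decompress straight into the pipe; nothing lands on disk
db2 "IMPORT FROM /tmp/data.del OF DEL INSERT INTO mytable"   # DB2 reads the pipe like a file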

This post is useful for DB2 developers, administrators, and database architects who want to learn how to access Oracle database stored procedures from DB2 database sources.
A federated system is a special type of distributed database management system (DBMS). A federated system consists of a DB2® database system that operates as a federated server, a database that acts as the federated database, one or more data sources, and clients (users and applications) that access the database and data sources.
The power of a federated system is in its ability to:
1. Correlate data from local tables and remote data sources, as if all the data is stored locally in the federated database
2. Update data in relational data sources, as if the data is stored in the federated database
3. Move data to and from relational data sources
4. Take advantage of the data source processing strengths, by sending requests to the data sources for processing
5. Compensate for SQL limitations at the data source by processing parts of a distributed request at the federated server
A federated procedure is a federated database object that references a procedure on a data source.
Federated procedures are not alternative names for data source procedures in the same way that aliases are alternative names. A federated procedure is defined at the federated database but calls a data source procedure when the federated procedure is invoked. Because the federated procedure is a federated database object, users and client applications can invoke the data source procedure logic by calling a federated procedure. The results of the data source procedure, such as the output parameters, are returned by the federated procedure. Using a federated procedure makes the location of the data source procedure transparent to users and client applications. You use the name of the federated procedure to call the data source procedure.
A federated procedure is to a remote procedure what a nickname is to a remote table. Nicknames and federated procedures are objects on the federated database. A nickname is an object that references an object, such as a table or view, on the data source. With a nickname, you query a data source object. With a federated procedure, you call a data source procedure.
You use the CREATE PROCEDURE (Sourced) statement to register a federated procedure and use the CALL statement to call a procedure. You can embed the CREATE PROCEDURE (Sourced) statement in an application program or issue the statement with dynamic SQL statements.
The CREATE PROCEDURE (Sourced) statement can be used to link a remote stored procedure:
- Stand-alone or in a package: after the SOURCE keyword you define the procedure you want to federate as, in its most complete form, schemaname.packagename.procname.
- With zero or more input and/or output parameters: if the procedure has parameters, it is enough to follow the source name with the keywords NUMBER OF PARAMETERS and the number of parameters (not their types).
This example shows how to use the CREATE PROCEDURE statement to create a federated procedure for a data source procedure on Oracle.
Step 1: Original Oracle DB source procedure code
CREATE OR REPLACE PROCEDURE ORASCHEMA.TESTFEDERATION (p_in IN VARCHAR2, p_out OUT INTEGER)
AS
v_count INTEGER := 0;
BEGIN
SELECT count(*)
INTO v_count
FROM datdb.nation;
p_out := v_count;
END;
Step 2: Create a DB2 procedure referring to the Oracle source procedure
CREATE OR REPLACE PROCEDURE DB2INST1.REM_TESTFEDERATION
SOURCE ORASCHEMA.TESTFEDERATION
NUMBER OF PARAMETERS 2
FOR SERVER TSMEDB
SPECIFIC TESTFEDERATION;
DB2INST1.REM_TESTFEDERATION => This is the DB2 name referring to the Oracle source procedure
ORASCHEMA.TESTFEDERATION => This is the Oracle source procedure name
NUMBER OF PARAMETERS => 2, the total count of input and output parameters
FOR SERVER TSMEDB => This is the server name created by the federation setup steps
SPECIFIC => A unique specific name assigned to the federated procedure
Below are the federation setup steps for the Oracle data source, for reference:
a. db2 "create wrapper orawrapper library 'libdb2net8.a'"
b. db2 "create server TSMEDB TYPE ORACLE VERSION 11.1 WRAPPER orawrapper OPTIONS (ADD NODE 'TMP_NODENAM')"
c. db2 "create user mapping for db2inst1 server TSMEDB options (ADD REMOTE_AUTHID 'col_team', ADD REMOTE_PASSWORD '*******')"
Step 3: To test the procedure just call it
db2 "call DB2INST1.REM_TESTFEDERATION('a',?)"
These posts are my opinions and do not necessarily represent IBM’s positions, strategies, or opinions
Happy Reading!!
Cheers
Ramya Yeleswarapu
Email: ramyeles@in[dot]ibm[dot]com

This article provides detailed information about the front-end and back-end processes of the IBM DB2 Command Line Processor (CLP) tool. It also discusses how these two processes communicate using IPC queues to execute a command. We have seen many PMRs in this area related to IPC queues and CLP processes. This article will help users, developers and L2 support to understand the command execution process and CLP behavior in case of an error. With this knowledge, it is easy to follow the process flow in db2trace files. This article provides only a high-level abstract of how IPC queues are used by CLP processes.

To execute a command using the IBM DB2 Command Line Processor, IBM ships two executables named 'db2' and 'db2bp'. On Windows, these are named db2.exe and db2bp.exe. We can find these executables under the <install_dir>\bin directory. These two executables run as two separate processes to execute a command. db2.exe is the front-end process (FP) and db2bp.exe is the back-end process (BP). FP interacts with the user and BP remains in the background.

FP takes a command from the user as input, parses it, and sends it to BP for execution. BP maintains the connection with the database. BP is created by FP for the first CLP command and remains in the background until the user issues the 'db2 terminate' command to terminate it. BP sends the SQL statement to the server for execution, receives the result from the server, and passes the result back to FP. FP formats the result and returns it to the user. FP acts as the user interface.

FP and BP are two independent processes that use IPC queues to communicate with each other. Each db2 command starts a new FP, and the FP terminates after passing the result to the user. All front-end processes with the same parent are serviced by a single back-end process, and therefore share a single database connection. Two FPs with different parent process IDs (PPIDs) use two different BPs for command execution. BP terminates itself if it finds that the parent process of FP no longer exists.

There are three IPC queues used by CLP, named the Request Queue (RQ), Input Queue (IQ) and Output Queue (OQ). CLP uses the PID of FP to construct the name of OQ, and the PPID of FP to construct the names of RQ and IQ; the exact naming convention is platform-specific. FP creates OQ at the start and deletes OQ at the end of execution. BP creates RQ and IQ at start-up and deletes them when BP terminates.

RQ is the primary link between FP and BP. RQ and IQ are identified by the PPID of FP, but FP does not create these queues; only BP creates and deletes RQ and IQ. FP sends its PPID to BP as an argument when it creates BP. We can see these queue names in a formatted db2 trace file collected using the db2trc tool.

To understand the flow of command execution, let us take the connect command "db2 connect to sample" as an example. Also, assume there is no BP running.

On the command prompt, the user enters the "db2 connect to sample" command
-> db2.exe gets "connect to sample" as its argument, so FP starts
-> FP constructs the names of the IPC queues RQ, IQ and OQ
-> FP tries to open RQ
-> FP gets an error because RQ does not exist, which implies there is no BP to serve this FP; a new BP must be created, because only BP can create and delete RQ (and IQ)
-> FP runs a system command with its PPID to start BP
-> BP gets created. BP creates RQ and IQ. BP installs an interrupt signal handler too. BP waits on RQ to receive a request from FP for command execution.
-> To make sure BP got created, FP tries to open RQ again.
-> If FP gets the RQ handle successfully, it opens IQ.
-> FP creates the Output Queue.
-> FP installs an interrupt signal handler.
-> FP parses the command, checking the syntax and type of command.
-> FP writes a request to RQ for new command execution. The request contains the PPID of FP and the OQ handle too.
-> When the FP request is received on RQ, BP writes to the output queue to tell FP to send the command on IQ.
-> FP reads OQ and writes the command for execution to IQ.
-> FP waits on OQ to get the result from BP.
-> BP reads the command from IQ and sends a connect request to the server.
-> BP receives the response from the server.
-> BP writes the result received from the server to OQ.
-> BP goes to IQ and waits there for the next instruction from the current FP.
-> FP reads the result from OQ.
-> FP writes the formatted result to stdout.
-> FP writes to IQ that it has finished execution.
-> BP reads the instruction from IQ and goes to RQ to serve a request from the next FP. BP continues to wait here in the background.
-> FP closes its RQ and IQ handles, deletes OQ, and finishes execution.

Since there is only one back-end process to serve requests from all front-end processes having the same parent process ID, the back-end process can execute only one command at a time. Requests from multiple front-end processes having the same parent process ID are served by the back-end process serially.

To control the communication between FP and BP over the message queues, CLP uses four environment variables: DB2BQTIME, DB2BQTRY, DB2RQTIME and DB2IQTIME. We can get the values of these environment variables by executing the "db2 LIST COMMAND OPTIONS" command. The DB2 Information Center also provides details about these environment variables.
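As a hedged sketch (the values below are the documented defaults to the best of my knowledge; confirm them in the Command Reference for your release), the variables can be exported before the first db2 command:

export DB2BQTIME=1     # seconds FP sleeps between checks that BP has started
export DB2BQTRY=60     # how many times FP retries that check
export DB2RQTIME=5     # seconds BP waits on the request queue for an FP request
export DB2IQTIME=5     # seconds BP waits on the input queue for a command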

Cloud computing has become all the rage in the IT industry and businesses today. It is transforming the way organizations create software and applications, and the way they do business. The fundamental need to focus on core business, control IT expenditures, and adapt to a changing business ecosystem is driving companies to move to the cloud.

Establishing an in-house data warehousing and business intelligence (BI) environment is not a trivial task: organizations have to spend millions of dollars to procure hardware and software, and then spend months on installation, configuration and optimization before they can actually start using these systems. On top of this comes the investment in resources for continuous administration, and the periodic hardware upgrades needed to manage growth and keep the momentum going.

All of the above factors combined make it very compelling for companies to shift some of their on-premise analytical data warehouse environments to the cloud. It simplifies and speeds up analytics without the need to deploy heavyweight infrastructure and teams. On-demand resource provisioning helps accommodate real-time workload surges without much manual interference. Imagine having a plethora of compute capacity lying idle in server rooms for once-or-twice-a-week reporting vs. having it on the cloud and paying only for the required usage.

However, irrespective of the innumerable benefits the cloud can provide, it still involves certain challenges that may make businesses wary of putting their data warehouses on the cloud:

Security:

What type of data can be put on a public cloud?

How secure is it to put sensitive organizational data on a public cloud?

Data Volume:

What volume of data can a cloud environment support? Loading the huge amounts of data typical of a data warehouse requires high bandwidth; how efficiently can a cloud handle that?

Performance:

Performance of a virtual machine on cloud may not match that of a bare metal server.

This may impact the complex analytics being performed on a data warehouse.

Transaction Latency:

What could be the impact on the business of the added transaction latency arising from communication over a network (large distance between datacenter and users and/or lower bandwidth), especially in the financial world?

There are several vendors who now provide data warehousing and BI as a service, but not everyone may be able to handle the complexity of a data warehouse and analytics ecosystem on the cloud.

IBM's BLU Acceleration on cloud is an offering which provides self-service BI and data warehousing on the cloud, using best-in-class security and other features to support even the most complex production environments. It is powered by IBM DB2 with BLU Acceleration, a next-generation in-memory technology. Columnar data processing and high compression rates, combined with enterprise-class BI and DW tools like Data Architect and Cognos, and compatibility with R, help customers transform their data into insights at the speed of thought. Through BLU Acceleration on Cloud, even small organizations that could not afford to establish data warehouses earlier can now have access to one of the most advanced analytical environments and make the best of their data at a very low cost.

BLU Acceleration on cloud is available on IBM SoftLayer and Amazon Web Services (AWS). I am really excited to invite you to get hands-on experience with the technology through a free trial (in beta). Do let us know your feedback or any queries you may have on this forum.

Although I work for IBM, the views expressed are my own and not necessarily those of IBM and its affiliates.

In a DB2 embedded SQL application, users can execute SQL statements from multiple threads using the DB2 context APIs. Application developers need to use contexts to switch between connections; each context is the environment from which an application runs all its SQL statements. Each context can be associated with one or more threads within an application. Context switching between connections is supported in embedded SQL C and C++ applications only. Contexts may be switched between threads in a process, but not between processes. Application developers can use multiple contexts to provide support for concurrent transactions in DB2 embedded SQL C and C++ applications.

How about that? Too abstract? Let me be simple and detailed.

A context can be created, attached, detached, and destroyed by using the following APIs in DB2.

sqleBeginCtx - Create a context
sqleAttachToCtx - Attach to a context
sqleDetachFromCtx - Detach from a context
sqleEndCtx - Detach from and destroy a context

How to change a session in a single thread:

Create as many contexts as sessions, e.g.,

sqleBeginCtx(&ctx1, SQL_CTX_CREATE_ONLY, NULL, &sqlca)

sqleBeginCtx(&ctx2, SQL_CTX_CREATE_ONLY, NULL, &sqlca)

Attach a context that you want to handle, e.g.,

sqleAttachToCtx(ctx1, NULL, &sqlca)

Execute SQL Statements.

To change to another session, detach the current context and attach the other session's context, e.g.,

sqleDetachFromCtx(ctx1, NULL, &sqlca)

sqleAttachToCtx(ctx2, NULL, &sqlca)

Execute SQLs.

Finally, end the contexts, e.g.,

sqleEndCtx(&ctx1, SQL_CTX_END_ALL, NULL, &sqlca)

sqleEndCtx(&ctx2, SQL_CTX_END_ALL, NULL, &sqlca)

Sample code to change a session in a single thread:

void *pContext[2];
struct sqlca sqlca;

/* Create two contexts, one per session */
sqleBeginCtx(&pContext[0], SQL_CTX_CREATE_ONLY, NULL, &sqlca);
sqleBeginCtx(&pContext[1], SQL_CTX_CREATE_ONLY, NULL, &sqlca);

/* Session 1: connect and run some statements */
sqleAttachToCtx(pContext[0], NULL, &sqlca);
EXEC SQL CONNECT TO :dbname USER :user USING :passwd;
EXEC SQL SELECT xxx
EXEC SQL UPDATE xxx
sqleDetachFromCtx(pContext[0], NULL, &sqlca);

/* Session 2: a separate connection on the second context */
sqleAttachToCtx(pContext[1], NULL, &sqlca);
EXEC SQL CONNECT TO :dbname USER :user USING :passwd;
EXEC SQL INSERT xxx
sqleDetachFromCtx(pContext[1], NULL, &sqlca);

/* Back to session 1: finish its work and disconnect */
sqleAttachToCtx(pContext[0], NULL, &sqlca);
EXEC SQL SELECT xxx
EXEC SQL COMMIT;
EXEC SQL CONNECT RESET;
sqleDetachFromCtx(pContext[0], NULL, &sqlca);

/* Back to session 2: finish its work and disconnect */
sqleAttachToCtx(pContext[1], NULL, &sqlca);
EXEC SQL UPDATE xxx
EXEC SQL COMMIT;
EXEC SQL CONNECT RESET;
sqleDetachFromCtx(pContext[1], NULL, &sqlca);

/* Destroy both contexts */
sqleEndCtx(&pContext[0], SQL_CTX_END_ALL, NULL, &sqlca);
sqleEndCtx(&pContext[1], SQL_CTX_END_ALL, NULL, &sqlca);

In a similar way, application developers can use the DB2 context APIs in multi-threaded applications as well.

Happy context switching in embedded applications! The best thing about IBM DB2 is that you can write anything from very simple to highly complex applications depending on your need, and that is why I like DB2.

One of the techniques to improve database performance and achieve greater scalability is table partitioning. Based on live industry observations, you should consider table partitioning when you see the following symptoms:

Slow queries that return small amounts or no data
Slow queries that return large amounts of data
Blocking between readers and writers (inserts or updates)

As tables grow in size, it may be easier to manage your data in chunks or by limited ranges. Here are 7 things to remember while partitioning tables. I will be using DB2 v10.5.0.0 pureScale to elucidate these points.

Not all tables are meant to be partitioned. Choosing the right table is not a simple exercise. Several inputs must be considered before taking a decision.

Only tables with higher reads should be partitioned, as writing is a far more expensive process than reading. Assess the rows read with the MON_GET_WORKLOAD table function; I recommend shortlisting tables where the rows-read vs. rows-inserted/updated ratio is above 70:30. For example: SELECT ROWS_READ, ROWS_RETURNED, ROWS_INSERTED FROM TABLE(MON_GET_WORKLOAD('', -2));

Another important consideration is the size of the table, or to be precise, the relative size of the table. In any application, some tables are significantly larger than others. There is less benefit in partitioning smaller tables. Shortlist the top 20 percent of tables according to table size, as in the sketch below.
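A hedged sketch of such a shortlist using the MON_GET_TABLE function (per-table read and insert counters; the row limit is illustrative):

db2 "SELECT VARCHAR(TABSCHEMA,10) AS SCHEMA, VARCHAR(TABNAME,20) AS NAME, ROWS_READ, ROWS_INSERTED FROM TABLE(MON_GET_TABLE('', '', -2)) AS T ORDER BY ROWS_READ DESC FETCH FIRST 20 ROWS ONLY"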

Different members in a DB2® pureScale® instance might require access to a page of data that another member is already using. The process whereby one member requests and is granted a page being used by another member is known as page reclaiming. RECLAIM_WAIT_TIME is the amount of time spent waiting on page locks where the lock request caused a page to be reclaimed.

Pick tables with a high reclaim wait time (again, a relative measure). Partitioning such tables would distribute rows across different pages, which might help reduce reclaim waits. This in turn improves the performance of the pureScale database. Reclaim wait time can be monitored using the MON_GET_PAGE_ACCESS_INFO table function:

SELECT SUBSTR(TABNAME,1,8) AS NAME,
       SUBSTR(OBJTYPE,1,5) AS TYPE,
       RECLAIM_WAIT_TIME
FROM TABLE(MON_GET_PAGE_ACCESS_INFO('', NULL, NULL)) AS T
ORDER BY RECLAIM_WAIT_TIME;

Another important question to answer is how many tables need to be partitioned. All tables shortlisted based on the above three criteria can be partitioned. Some experimentation will be required to determine the ideal number of tables to partition.

Once the table is identified, the next step is to identify the partitioning column(s), i.e. the partition key. The most common partition key column is the unique identifier (ID) or the primary key. Other possibilities include dates and object names. The final decision often depends on the partitioning strategy.

There are several partitioning strategies in use. A commonly used scheme is by date, where you may decide to clump together data in data partitions by year or month. You could also partition on numeric attributes: for instance, records with IDs from one up to 1 million are stored in one data partition, IDs from 1 million to 2 million in another data partition, and so on. Or, for example, you could have records for customers with names starting with A-C in one data partition, D-M in the second, N-Q in a third, and R-Z in the last.

Note: Although you have the option of referring to data partitions by names or numbers (useful for data partition operations), they can be completely transparent to applications. That is, applications can continue to access data by specifying column and table names, and do not need to worry about which data partition(s) the data resides in.
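For instance, a hedged sketch of the date-based scheme (table, column and partition names are illustrative):

CREATE TABLE sales (
  id        BIGINT NOT NULL,
  sale_date DATE   NOT NULL,
  amount    DECIMAL(10,2)
)
PARTITION BY RANGE (sale_date)
  (PARTITION p2014q1 STARTING '2014-01-01' ENDING '2014-03-31',
   PARTITION p2014q2 STARTING '2014-04-01' ENDING '2014-06-30',
   PARTITION p2014q3 STARTING '2014-07-01' ENDING '2014-09-30',
   PARTITION p2014q4 STARTING '2014-10-01' ENDING '2014-12-31');

Queries that filter on sale_date touch only the matching partitions, and each partition can later be attached or detached independently.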

Based on the size of the table, the number of rows read, and the partitioning strategy, the number of partitions can be arrived at. Should there be 2 partitions or 10? What happens if the number of partitions increases? If the read-write ratio is greater than 7:1, then one can consider multiple partitions; anything less, and the ideal number of partitions is two. It is important to recalculate the read-write ratio of the partitions once partitioning is complete. We can run experiments to identify the ideal number of partitions for DB2 performance.

One of the business partners for whom we executed a successful performance benchmarking exercise, achieving 60 million transactions per hour, recommended partitioning, and this is what they said:

Table Partitioning is the best way to improve database performance where each and every query is set up to take advantage of the partitioning column. Partitioning didn't change the behavior of isolation levels.

- Senior Project Lead, Business Partner

I will be posting more on the optimization techniques from a live case study where we used the PureData for Transactions machine (PDTX) with DB2 pureScale at its heart.

These posts are my opinions and do not necessarily represent IBM’s positions, strategies, or opinions

The administrative task scheduler (ATS) enables DB2 database servers to automate the execution of tasks. It also provides a programmable SQL interface, which allows you to build applications that can take advantage of the administrative task scheduler.

The administrative task scheduler manages and runs administrative tasks, which must be encapsulated in either user-defined or built-in procedures. You can add, update and remove tasks from the scheduler's task list by using a set of built-in procedures. You can also monitor the task list and the status of executed tasks by using administrative views, as sketched below.
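For example (a hedged sketch; SYSTOOLS.ADMIN_TASK_LIST and SYSTOOLS.ADMIN_TASK_STATUS are the administrative views involved, but verify the column names for your release):

db2 "SELECT NAME, SCHEDULE FROM SYSTOOLS.ADMIN_TASK_LIST"
db2 "SELECT * FROM SYSTOOLS.ADMIN_TASK_STATUS"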

DB2 process responsible for running scheduled jobs:

Scheduled tasks are executed by the DB2 autonomic computing daemon, which also hosts the health monitor and automatic maintenance utilities. This daemon appears in the process list as db2acd, and it starts and stops in conjunction with the database manager. Every five minutes the DB2 autonomic computing daemon checks for new or updated tasks. To do this, it briefly connects to each active database and retrieves the new and updated task definitions. The daemon does not connect to inactive databases. To ensure scheduled tasks are executed as expected, the database must remain active and the task's earliest begin time should be at least five minutes after it is created or updated.

Internally, the daemon maintains a list of the active tasks. When a task's scheduled execution time arrives, the daemon connects to the appropriate database and calls the procedure associated with the task. If the database is not active, the daemon will not execute the task; it writes an ADM15502W message in both the administration notification log and the db2diag.log. If, for some other reason, the daemon fails to execute the task, an ADM15501W message is written to both the administration notification log and the db2diag.log. The daemon then automatically attempts to execute the task every 60 seconds.

The daemon will never execute a task if a previous instance of the same task is still outstanding. For example, assume a task is scheduled to run every 5 minutes. If, for some reason, the task takes 7 minutes to complete, the daemon will not execute another instance of the task at the next 5-minute interval. Instead, the task will run at the 10-minute mark.

The administrative task scheduler operates independently of IBM Data Studio and the Database Administration Server (DAS). It is included in DB2 database servers and is disabled by default. In order to successfully execute tasks, you must set up the administrative task scheduler.

Setting up the administrative task scheduler

Set the DB2_ATS_ENABLE registry variable to YES, TRUE, 1, or ON. For example:

db2set DB2_ATS_ENABLE=YES

Create the SYSTOOLSPACE table space

Like other DB2 administration tools, the administrative task scheduler depends on the SYSTOOLSPACE table space to store historical data and configuration information. You can check if the table space already exists in your database system with the following query:

SELECT TBSPACE FROM SYSCAT.TABLESPACES WHERE TBSPACE = 'SYSTOOLSPACE'

If your database does not have this table space, you must create it. Otherwise you receive an error message when you try to add a task to the administrative task scheduler:

SQL0204N "SYSTOOLSPACE" is an undefined name. SQLSTATE=42704

Any user that belongs to the SYSADM or SYSCTRL group has the authority to create this table space. For instructions, refer to "SYSTOOLSPACE and SYSTOOLSTMPSPACE table spaces".

For example:

CREATE TABLESPACE SYSTOOLSPACE IN IBMCATGROUP
  MANAGED BY AUTOMATIC STORAGE
  EXTENTSIZE 4

Activate your database. Your database must be active for your tasks to execute on time. The best way to do this is to use the ACTIVATE DATABASE command. Alternatively, you can keep a database active if you maintain at least one database connection at all times.

Results

Once the administrative task scheduler is set up, the DB2 autonomic computing daemon starts checking for new or updated tasks by connecting to active databases every five minutes.

Adding jobs to ATS

A job is created by adding a task to the ATS. This can be done in two ways:

a) The ADMIN_TASK_ADD procedure, to define new scheduled tasks

b) The DBMS_JOB module

Let's see how to schedule jobs using each one of them.

ADMIN_TASK_ADD procedure - Schedule a new task

The ADMIN_TASK_ADD procedure schedules an administrative task, which is any piece of work that can be encapsulated inside a procedure.

Syntax:

ADMIN_TASK_ADD(name, begin_timestamp, end_timestamp, max_invocations, schedule, procedure_schema, procedure_name, procedure_input, options, remarks)

Usage:

Example: Consider a stored procedure (proc_insert) which has to be scheduled to run at 10:45 AM every day.
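A hedged sketch of that call (assuming proc_insert lives in schema DB2INST1 and takes no input; the task name is illustrative, and the schedule uses UNIX cron format, minute first):

db2 "CALL SYSPROC.ADMIN_TASK_ADD('daily proc_insert run', NULL, NULL, NULL, '45 10 * * *', 'DB2INST1', 'PROC_INSERT', NULL, NULL, NULL)"

Passing NULL for the begin and end timestamps and for max_invocations lets the task run indefinitely on its schedule.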

PL/SQL (Procedural Language/Structured Query Language) statements can be compiled and executed using DB2 interfaces. This support reduces the complexity of enabling existing PL/SQL solutions so that they will work with the DB2 data server.

The supported interfaces include:

* DB2 command line processor (CLP)
* DB2 CLPPlus
* IBM® Data Studio full client

The DB2 Command Line Processor (CLP) is a command line interface and tool available with both DB2 servers and DB2 clients, from which DB2 commands can be issued, SQL statements executed, and utilities run. The CLP is essentially a command processor or shell environment that is customized for working with DB2. The DB2 CLP can be used as a primary interface for interacting with DB2 instances and databases, as an alternative to using the DB2 graphical user interfaces, or as an interface for occasional use. The db2 command starts the command line processor (CLP).

We will learn to execute PL/SQL statements through interactive input mode, characterized by the db2 => input prompt.

PL/SQL statement execution is not enabled from these interfaces by default. PL/SQL statement execution support must be enabled on the DB2 data server.

Let us see how to enable PL/SQL statement support in DB2 v9.7, and then write sample PL/SQL programs and learn how to compile and execute them through the DB2 CLP command window.

Enable PL/SQL statements in DB2 environments

# Open a DB2 command window.

# Start the DB2 database manager.

db2start

# Set the DB2_COMPATIBILITY_VECTOR registry variable to ORA, which enables the Oracle compatibility features.

db2set DB2_COMPATIBILITY_VECTOR=ORA

# Set the DB2_DEFERRED_PREPARE_SEMANTICS registry variable to YES to enable deferred prepare support.

db2set DB2_DEFERRED_PREPARE_SEMANTICS=YES

# Issue the db2stop command and the db2start command to stop and then restart the database manager.

db2stop
db2start

After the above steps are executed successfully, we may continue to create the database and database objects like tables, indexes, views, sequences etc. The steps to create these will not be discussed, as the focus here is on PL/SQL statement creation and execution.

Remember to execute the below db2 update cfg commands immediately after the database is created and before creating any DB objects like tables, indexes etc.:

db2 update db cfg for <dbname> using AUTO_REVAL DEFERRED_FORCE
db2 update db cfg for <dbname> using DECFLT_ROUNDING ROUND_HALF_UP

In this section we shall learn to execute PL/SQL procedures using simple datatype variables, record types and associative arrays as input variables to the procedures, using the DB2 CLP.

Create and execute a PL/SQL procedure

The below connection steps are a one-time setup for the following SQL statements:

db2 connect to mydb
db2 set schema DEMO
db2 set PATH=SYSTEM PATH, 'DEMO'
db2 -td@

Here '@' is the statement termination character.

a. PL/SQL procedure with a simple datatype variable as IN parameter, using DB2 CLP

db2 => CREATE TABLE emp (
name VARCHAR2(10),
salary NUMBER ) @

db2 => CREATE OR REPLACE PROCEDURE fill_comp( p_name IN VARCHAR,
p_sal IN NUMBER,
msg OUT VARCHAR)
AS
BEGIN
insert into emp VALUES (p_name, p_sal) ;
msg := 'The procedure inserts values into emp table:' ;
END fill_comp;
@

db2 => CALL fill_comp('Paul', 200, ?)@

The SQL command executed successfully

The procedure inserts values into emp table

b. PL/SQL procedure with a record type as IN parameter

db2 => CREATE OR REPLACE PACKAGE REC_PKG
AS
TYPE EMP_OBJ IS RECORD(
NAME VARCHAR(200),
ADDR VARCHAR(200),
PHONE NUMBER);
PROCEDURE fill_comp( IN_EMP IN EMP_OBJ, stat OUT VARCHAR2);
END @

db2 => CREATE OR REPLACE PACKAGE BODY REC_PKG
AS
PROCEDURE fill_comp( IN_EMP IN EMP_OBJ,
stat OUT VARCHAR2)
AS
BEGIN
insert into employee values(IN_EMP.NAME,
IN_EMP.ADDR,
IN_EMP.PHONE);
stat := ' This procedure inserts into employee';
END fill_comp;
END REC_PKG @

db2 => SET SERVEROUTPUT ON@

db2 => DECLARE
EMP_VAL REC_PKG.EMP_OBJ;
status VARCHAR2(30);
BEGIN
EMP_VAL.NAME := 'Paul';
EMP_VAL.ADDR := 'Sandiego';
EMP_VAL.PHONE := 1234;
REC_PKG.fill_comp(EMP_VAL, status);
DBMS_OUTPUT.PUT_LINE(status);
END;
@

The SQL command completed successfully

This procedure inserts into employee

c. PL/SQL procedure with an array type as IN parameter

db2 => CREATE OR REPLACE PACKAGE REC_PKG
AS
TYPE EMP_OBJ IS RECORD(
NAME VARCHAR(200),
ADDR VARCHAR(200),
PHONE NUMBER);
TYPE emp_array IS TABLE OF EMP_OBJ INDEX BY BINARY_INTEGER;
PROCEDURE fill_comp( IN_EMP_ARR IN emp_array, stat OUT VARCHAR2);
END @

db2 => CREATE OR REPLACE PACKAGE BODY REC_PKG
AS
PROCEDURE fill_comp( IN_EMP_ARR IN emp_array,
stat OUT VARCHAR2)
AS
BEGIN
FOR i IN 1 .. IN_EMP_ARR.COUNT
LOOP
insert into employee values(IN_EMP_ARR(i).NAME,
IN_EMP_ARR(i).ADDR,
IN_EMP_ARR(i).PHONE);
END LOOP;
stat := ' This procedure inserts into employee from an array';
END fill_comp;
END REC_PKG @

db2 => SET SERVEROUTPUT ON@

db2 => DECLARE
EMP_ARR REC_PKG.emp_array;
status VARCHAR2(30);
BEGIN
EMP_ARR(1).NAME := 'Mike';
EMP_ARR(1).ADDR := 'Paris';
EMP_ARR(1).PHONE := 4367;
REC_PKG.fill_comp(EMP_ARR, status);
DBMS_OUTPUT.PUT_LINE(status);
END;
@

The SQL command completed successfully

This procedure inserts into employee from an array

This topic helps you to compile and execute PL/SQL procedures and packages from DB2 CLP interactive input mode. I have tried to explain, through examples, the most frequently used PL/SQL procedure formats and how to perform runtime testing and execute them in the DB2 CLP.

These posts are my opinions and do not necessarily represent IBM’s positions, strategies, or opinions

Let's all face it: Cloud Computing was a fad in 2010; in 2011 it drove key IT decision makers to conduct cloud POCs in the enterprise; and in 2012, adoption will increase further if the current rate of demand continues.

Today, I delivered a webinar on DB2 and Cloud Computing with Databases in the Cloud on IBM DeveloperWorks.

It was an amazing experience talking to DB2 Users and IT Pros in general.

The focus of this article is to illustrate some of the key benefits and resources available for implementing a Cloud Solution.