One hot day (Hyderabad is very hot these days!), while browsing through My developerWorks, I found that there was an event, Develothon, in Hyderabad! I fetched more information from my favorite search engine, and volunteered to speak at the unconference. I started thinking of a topic, but had no clue till the end of the day. Many a time I get ideas while walking alone on the streets, and that day too I got a topic while going back home: yes, I could speak on "ECM on the cloud" at the unconference. It was a nice experience to get so many inputs from peers on the topic, and so many modifications (even though I got very little time to speak!).

Yep, the day came! I went through the slides quickly and started for the venue. The man at the entrance (indeed, he looked like a king!) greeted me with a big smile. The sign boards showed me the way to the My developerWorks venue. A good-looking lady greeted me with "Good morning! May I have your business card please?" Oh yes, take this (frankly, it gives me a lot of pleasure when people ask for my business card). I moved inside, found an empty row, and sat alone!

I was very much fascinated by the theme of cloud computing, and eager to hear more about it and IBM's place in the cloud. Develothon kicked off with a video about IBM's "THINK". The first session, on My developerWorks, was by Himanshu; it was an interaction more than a speech! I was really amazed that IBM has 30,000 job categories. It really scared me... Finally, the much-awaited session on cloud computing was started by Srinivas, who, with his humour, made the clouds clash!

At last, the unconference started. Initially folks were very reluctant to speak; only two registered, one of them me! I was happy that I would get more time to speak in the limelight (copying from the invitation to Develothon). The show went on and on and ended up with 10 registrations! Many were good speakers. I got to meet a person who knows 15 computer languages (I guess he is older than my grandpa!).

Overall, it was an amazing time with like-minded people. Thanks to the developerWorks team for putting on such an event with great enthusiasm; I deeply appreciate all your hard work in this hot city in mid April. I look forward to more such events in the future!

Rajat has around 7 years of IT industry experience and has had the opportunity to work extensively in quality-driven product development in the BI domain. He has been involved in many customer engagements showcasing packaged analytics solutions and developing proofs of concept for banks and financial institutions using Cognos software. He has worked in various roles and specializes in the Agile methodology for software development.

Rajat likes to make friends and makes sure to connect with them at regular intervals on various social networks. The things that top his hobbies list are listening to music, travelling, hanging out with friends, and keeping himself updated on emerging technologies.

On April 7, the Object Management Group (http://www.omg.org/) announced the Cloud Standards Customer Council (http://www.cloud-council.org/) to guide how we, as an industry, evolve cloud standards based on real-world use cases and experiences.

The goals of the initiative are to:

- Drive user requirements into the standards development process.
- Establish the criteria for open standards based cloud computing.
- Deliver content in the form of best practices, case studies, use cases, requirements, gap analysis and recommendations for cloud standards.

As cloud computing continues to evolve, it is very important that the cloud remains open and that there is a process to make cloud standards customer driven. The membership has grown from 45 at the launch to over 100 members in a very short two weeks. Members represent a variety of industries and geographies, all committed to working towards an open cloud.

Mel Greer of Lockheed Martin has been appointed the interim chair of the Council’s Steering Committee. Formal elections of the Steering Committee will take place at the first face-to-face meeting on June 21st. Your support for the Open Cloud Manifesto is appreciated, and we see the Cloud Standards Customer Council as an extension of the Manifesto.

You are invited to have your company join the Cloud Standards Customer Council: add your voice and your requirements to the community and become part of a team dedicated to accelerating the adoption of cloud computing and ensuring the flexibility, portability and interoperability of cloud services by keeping the cloud open. Membership is free for qualified end-user organizations. The membership application is available at http://www.cloud-council.org/application. Vendors may join as sponsors. For membership or sponsorship questions, contact Ken Berk at ken.berk@omg.org or +1-781-444-0404.

The first virtual meeting of the CSCC (www.cloud-council.org) has been held. Work is underway to prepare for the first face-to-face meeting in Salt Lake City on June 21st.

Several key areas were discussed, but the ones that surfaced to the top were the identification of candidate Working Groups and also the nomination process for the Steering Committee.

The working group discussion revolved around identifying areas where the members felt further discussion was needed to ensure that their requirements are addressed without reinventing work that has already been completed. The following is a list of the candidate working groups, of which those with the highest priority will be launched at the F2F:

At the F2F meeting, the chairs of the respective WGs will be elected by the WG participants.

To maintain order as the CSCC evolves, a Steering Committee will be put in place, with its members drawn from the membership. Elections for the Steering Committee will be held on June 21st. To take a leadership role within the CSCC as a member of the Steering Committee, one must first be a member of the Council. The process is simple, and for qualified enterprises there is no fee. If you have any questions on the membership process, please contact Ken Berk, ken.berk@cloudcustomercouncil.org or 1.781.444.1132 ext 150.

The membership application process can be initiated at www.cloud-council.org/application.

NOTE: Membership has tripled since the announcement in April and is now over 140 companies.

Once the membership process has been completed, you can submit your nomination to be a member of the Steering Committee by sending an email to info@cloud-council.org.

We look forward to your participation in the CSCC and to you adding your voice to open cloud computing.

Every DB2 UDB database uses one of two logging modes: circular logging or archival logging. When a database is created it is in circular logging mode by default; you can change it to archival logging as business requirements dictate. When the database is created, three primary and two secondary log files are also created.
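The number and size of these log files are governed by database configuration parameters. As a quick sketch (assuming a database named db_name, on a UNIX-style shell; on Windows use findstr instead of grep), you can inspect them with:

```shell
# Show the logging-related database configuration parameters
db2 get db cfg for db_name | grep -i -E "logprimary|logsecond|logfilsiz"
```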

Circular logging mode

Circular logging is the default logging mode. When it is used, records stored in the log buffer are written to the primary log files in a circular fashion. Redo log entries are written to the current "active" log file; when that log file becomes full, it is marked as "unavailable", and DB2 marks the next log file in the sequence as the active log file and continues writing log entries into it. When that log file in turn becomes full, the process repeats. When transactions terminate, the corresponding log records are released because they are no longer needed. When all the records stored in an individual log file have been released, that log file is treated as "reusable": it can become the "active" log file for subsequent transactions, and its contents are overwritten by new log entries.

If logging wraps back around to a primary log file that is still marked "unavailable", the DB2 database manager allocates a secondary log file, and log entries are then written into it. As soon as that secondary log file becomes full, the DB2 database manager checks the primary log file again; if its status is still "unavailable", another secondary log file is allocated and redo entries are written into it. This process continues until all the secondary log files, the number of which is set by the "logsecond" parameter, are full. Once all the secondary log files are full and there is no primary log file available for writing redo entries, the following error message is generated:

SQL0964C The transaction log for the database is full.

In this case you have to increase the number of secondary log files:

db2 update db cfg for db_name using logsecond <value>

In circular logging mode, because the contents of the log files are overwritten, you can recover the database only up to the last full database backup performed; you cannot perform point-in-time recovery.
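Under circular logging, recovery is therefore limited to restoring the most recent full backup (a sketch; the database name and backup path are placeholders):

```shell
# Version recovery: restore the database from the last full backup
db2 restore database db_name from d:\db_name\backup
```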

Archival logging mode

As in circular logging mode, when archival logging is used the redo log entries from the log buffer are written into the primary log files. Unlike circular logging, however, these log files are never reused for new log entries. When all records stored in an individual log file are released, that file is marked as "archived" rather than "reusable", and it can be used again only if it is needed for roll-forward recovery. When the first primary log file becomes full, the next primary log file is allocated, so that the desired number of primary log files ("logprimary") is always available for use. All the log entries related to a single transaction must fit within the available active log space; a long-running transaction that requires more log space than the primary log files provide may cause secondary log files to be allocated and used.

In this mode, once a log file is full, another primary log file is allocated and transaction information is logged into it. During that time the first primary log file is archived to the destination configured by the "logarchmeth1" parameter, after which it can be reused for further log information. The same process repeats as long as disk space is available at the archive destination.

With archival logging you can recover your database to its current state, or to a particular point in time, using the point-in-time recovery options described in a later section of this article. The archived log files are used for point-in-time recovery.
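As a sketch (names and paths are placeholders), a recovery under archival logging combines a restore with a roll-forward through the archived log files:

```shell
# Restore a full backup, then replay the archived logs up to the end
db2 restore database db_name from d:\db_name\backup
db2 rollforward database db_name to end of logs and stop
```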

b) How to change the database logging mode:

As described earlier, when a DB2 UDB database is created it is in circular logging mode by default. To change the mode from circular to archival logging, perform the following steps:

Step 1: Change the following parameters.

i) logretain=recovery or userexit=on

db2 terminate

db2 force application all

db2 update db cfg for db_name using logretain recovery

or

db2 update db cfg for db_name using userexit on

ii) blk_log_dsk_ful=yes

db2 update db cfg for db_name using blk_log_dsk_ful yes

Setting blk_log_dsk_ful to yes causes applications to hang, instead of failing, when DB2 encounters a log-disk-full error, giving you a chance to resolve the error so that the transaction can complete. You can resolve a disk-full situation by moving old log files to another file system or by enlarging the file system, so that hanging applications can complete.

After these parameters are changed, the next attempt to connect to the database fails with:

SQL1116N A connection to or activation of database "db_name" cannot be made because of backup pending. SQLSTATE=57019

This is because we have changed the database logging mode from circular to archival, so we must perform a full database backup here: a backup taken in circular logging mode is not applicable to archival logging mode, and vice versa.

db2 backup database db_name to d:\db_name\backup

db2 connect to db_name

Step 2: Set the archive destination.

To set the archive destination, create a folder "archive" on a disk where you have sufficient space, so that log files are archived to this location once they fill up. Keep your archive destination separate from the log file destination.
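The step above can then be completed by pointing the archive parameter at that folder (a sketch; the path is a placeholder):

```shell
# Archive filled log files to the "archive" folder
db2 update db cfg for db_name using LOGARCHMETH1 DISK:d:\archive
```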

Let's all face it: cloud computing was a fad in 2010; in 2011 it drove key IT decision makers to conduct cloud POCs in the enterprise; and in 2012 adoption will increase further if the current rate of demand continues.

Today, I delivered a webinar on DB2 and Cloud Computing with Databases in the Cloud on IBM DeveloperWorks.

It was an amazing experience talking to DB2 Users and IT Pros in general.

The focus of this article is to illustrate some of the key benefits and resources available for implementing a Cloud Solution.

On Feb 1st, 2012, I gave a seminar on my experience working with IBM Optim Tool. This session was well attended;

In the presentation, I talked about my experiences with the product and the way we customised the tool for our needs. The most intriguing thing about the IBM Optim tool is how well it binds with the backend database. We used DB2 for z/OS, but I'm sure Optim integrates just as well with other databases.

We were able to implement a comprehensive Archival and Purge solution in less than 3 months. This is a tremendous achievement, keeping in mind that the daily data archival volume was around 0.3 million rows!

PL/SQL (Procedural Language/Structured Query Language) statements can be compiled and executed using DB2 interfaces. This support reduces the complexity of enabling existing PL/SQL solutions so that they will work with the DB2 data server.

The supported interfaces include:

* DB2 command line processor (CLP)
* DB2 CLPPlus
* IBM® Data Studio full client

The DB2 command line processor (CLP) is a command line interface and tool, available with both DB2 servers and DB2 clients, from which DB2 commands can be issued, SQL statements executed, and utilities run. The CLP is essentially a command processor or shell environment customized for working with DB2. It can be used as a primary interface for interacting with DB2 instances and databases, as an alternative to the DB2 graphical user interfaces, or as an interface for occasional use. The db2 command starts the command line processor (CLP).

We will learn to execute PL/SQL statements through interactive input mode, characterized by the db2 => input prompt.

PL/SQL statement execution is not enabled from these interfaces by default; it must first be enabled on the DB2 data server.

Let us see how to enable PL/SQL statement support in DB2 V9.7, then write sample PL/SQL programs and learn how to compile and execute them through the DB2 CLP command window.

Enable PL/SQL statements in DB2 environments

# Open a DB2 command window.

# Start the DB2 database manager:

db2start

# Set the DB2_COMPATIBILITY_VECTOR registry variable to ORA, which enables all of the Oracle compatibility features:

db2set DB2_COMPATIBILITY_VECTOR=ORA

# Set the DB2_DEFERRED_PREPARE_SEMANTICS registry variable to YES to enable deferred prepare support:

db2set DB2_DEFERRED_PREPARE_SEMANTICS=YES

# Issue the db2stop and db2start commands to stop and then restart the database manager:

db2stop

db2start

After the above steps execute successfully, we can continue to create a database and database objects such as tables, indexes, views and sequences. The steps to create these are not discussed here, as the focus is on PL/SQL statement creation and execution.

Remember to execute the following db2 update cfg commands immediately after the database is created, and before creating any database objects such as tables and indexes:

db2 update db cfg for <dbname> using AUTO_REVAL DEFERRED_FORCE

db2 update db cfg for <dbname> using DECFLT_ROUNDING ROUND_HALF_UP

In this section we shall learn to execute PL/SQL procedures using simple datatype variables, record types, and associative arrays as input variables to the procedures, using the DB2 CLP.

Create and execute a PL/SQL procedure

The connection steps below are a one-time setup for the SQL statements that follow:

db2 connect to mydb

db2 set schema DEMO

db2 set PATH=SYSTEM PATH, 'DEMO'

db2 -td@

Here '@' is the statement termination character.

a. PL/SQL procedure with a simple datatype variable as IN parameter, using DB2 CLP

db2 => CREATE TABLE emp (
         name   VARCHAR2(10),
         salary NUMBER ) @

db2 => CREATE OR REPLACE PROCEDURE fill_comp( p_name IN VARCHAR,
                                              p_sal  IN NUMBER,
                                              msg    OUT VARCHAR)
AS
BEGIN
  insert into emp VALUES (p_name, p_sal);
  msg := 'The procedure inserts values into emp table:';
END fill_comp;
@

db2 => CALL fill_comp('Paul', 200, ?)@

The SQL command executed successfully

The procedure inserts values into emp table:

b. PL/SQL procedure with a record type as IN parameter

db2 => CREATE OR REPLACE PACKAGE REC_PKG
AS
  TYPE EMP_OBJ IS RECORD(
    NAME  VARCHAR(200),
    ADDR  VARCHAR(200),
    PHONE NUMBER);
  PROCEDURE fill_comp( IN_EMP IN EMP_OBJ, stat OUT VARCHAR2);
END
@

db2 => CREATE OR REPLACE PACKAGE BODY REC_PKG
AS
  PROCEDURE fill_comp( IN_EMP IN EMP_OBJ,
                       stat   OUT VARCHAR2)
  AS
  BEGIN
    insert into employee values(IN_EMP.NAME,
                                IN_EMP.ADDR,
                                IN_EMP.PHONE);
    stat := ' This procedure inserts into employee';
  END fill_comp;
END REC_PKG @

db2 => SET SERVEROUTPUT ON@

db2 => DECLARE
  EMP_VAL REC_PKG.EMP_OBJ;
  status  VARCHAR2(30);
BEGIN
  EMP_VAL.NAME  := 'Paul';
  EMP_VAL.ADDR  := 'Sandiego';
  EMP_VAL.PHONE := 1234;
  REC_PKG.fill_comp(EMP_VAL, status);
  DBMS_OUTPUT.PUT_LINE(status);
END;
@

The SQL command completed successfully

This procedure inserts into employee

c. PL/SQL procedure with an array type as IN parameter

db2 => CREATE OR REPLACE PACKAGE REC_PKG
AS
  TYPE EMP_OBJ IS RECORD(
    NAME  VARCHAR(200),
    ADDR  VARCHAR(200),
    PHONE NUMBER);
  TYPE emp_array is table of EMP_OBJ INDEX BY BINARY_INTEGER;
  PROCEDURE fill_comp( IN_EMP_ARR IN emp_array, stat OUT VARCHAR2);
END
@

db2 => CREATE OR REPLACE PACKAGE BODY REC_PKG
AS
  PROCEDURE fill_comp( IN_EMP_ARR IN emp_array,
                       stat       OUT VARCHAR2)
  AS
  BEGIN
    FOR i IN 1 .. IN_EMP_ARR.COUNT
    LOOP
      insert into employee values(IN_EMP_ARR(i).NAME,
                                  IN_EMP_ARR(i).ADDR,
                                  IN_EMP_ARR(i).PHONE);
    END LOOP;
    stat := ' This procedure inserts into employee from an array';
  END fill_comp;
END REC_PKG @

db2 => SET SERVEROUTPUT ON@

db2 => DECLARE
  EMP_ARR REC_PKG.emp_array;
  status  VARCHAR2(30);
BEGIN
  EMP_ARR(1).NAME  := 'Mike';
  EMP_ARR(1).ADDR  := 'Paris';
  EMP_ARR(1).PHONE := 4367;
  REC_PKG.fill_comp(EMP_ARR, status);
  DBMS_OUTPUT.PUT_LINE(status);
END;
@

The SQL command completed successfully

This procedure inserts into employee from an array

This topic has helped you compile and execute PL/SQL procedures and packages from the DB2 CLP interactive input mode. I have tried to explain, through examples, the most frequently used PL/SQL procedure formats and how to execute and test them at run time in the DB2 CLP.

These posts are my opinions and do not necessarily represent IBM’s positions, strategies, or opinions

The topics in this section describe how to package and test a Windows Azure application, how to create and deploy a hosted service in Windows Azure, and how to delete a hosted service that is no longer in use.

How to Package your Service

The CSPack Command-Line Tool packages your service to be deployed to the Windows Azure fabric. The cspack.exe
utility generates a service package file that you can upload to Windows
Azure using the Windows Azure Platform Management Portal. By default
the package is named .cspkg, but you can specify a different name if you choose.

CSPack command-line example

The following command-line example creates a package that can be deployed as a hosted service. It specifies the service definition file to use; using the /role option it specifies the directory where the binaries for the role reside and the DLL where the entry point of the role is defined; and using the /out option it specifies the output location for the role binaries and the name of the package file.
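A sketch of such a command line (the service definition file, role name, binaries directory, and output name below are all hypothetical):

```shell
# Package WorkerRole1 into MyService.cspkg for upload via the Management Portal
cspack ServiceDefinition.csdef ^
  /role:WorkerRole1;WorkerRole1\bin\Debug;WorkerRole1.dll ^
  /out:MyService.cspkg
```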

How to Deploy a Service in Windows Azure

The
Windows Azure SDK provides an environment and tools for developing
services to be deployed to Windows Azure. You can use the Windows Azure
compute emulator and storage emulator to debug your application and
perform mixed-mode testing. Then use the CSPack Command-Line Tool to
package the application for deployment to the Windows Azure staging or
production environment.

The following figure shows the stages of service development and deployment.

You can debug your service locally, without connecting to Windows Azure, by using the compute and storage emulators. The Windows Azure compute emulator simulates the Windows Azure fabric, letting you run and test your service locally to ensure that it writes adequate information to the log. After your service is deployed to the Windows Azure staging or production environment, logging messages and alerts are the only way to gather debugging information: you cannot attach a debugger to a service that is deployed in Windows Azure. For more information, see the documentation on using the compute emulator to debug your service.

The
storage emulator service simulates the Windows Azure storage services,
letting you run and debug code that calls into the storage services
and, together with the compute emulator, helps you test your service in
the local environment. Once your service is running in the local
development environment, you can change your configuration files to
connect to Windows Azure and test against the production storage
services in mixed mode.

To learn more, see the documentation on configuring storage access URIs to access storage resources in Windows Azure or the storage emulator.

When
your service is connected to the Windows Azure production storage
services it runs in mixed mode, meaning that the service executes in
the compute emulator, but your data is hosted in Windows Azure. Once
local testing is complete, using mixed mode lets you test your service
in a staging environment.

After you debug your service in mixed mode, you are ready to package it for deployment to Windows Azure.

Once
debugging is complete, use the CSPack Command-Line Tool to package your
service for deployment to the Windows Azure staging or production
environment. The cspack.exe utility generates a service
package file that you can upload to Windows Azure by using the Windows
Azure Platform Management Portal. The default package name is .cspkg, but you can specify a different name if you choose.

If you
have installed the Windows Azure Tools for Microsoft Visual Studio, you
can package and deploy your service from within Visual Studio. For
more information, see How to Publish a Windows Azure Application using
the Windows Azure tools.

After
you package your service, you can use the Windows Azure Platform
Management Portal to create a hosted service that you can deploy to the
Windows Azure staging or production environment.

You will need to upload two files:

The service package file that you created with the cspack.exe utility.

The service configuration file, which provides configuration values for your service.

When you
upload the service package and configuration file, you will be
provided with an internal staging URL that you can use to test your
service privately in the Windows Azure staging environment. When you
are ready to put your service into production, swap the service from
the staging URL to the production URL.

How to Create a Hosted Service

Once you have written your Windows Azure application, you must deploy it to a Windows Azure hosted service.

On the ribbon, click New Hosted Service. This will open the Create a New Hosted Service window.

On the Create a new Hosted Service window select a subscription to add the hosted service to from the Choose a Subscription dropdown.

In Enter a name for your service, type a name for this service. This will help you identify this particular service when you have deployed multiple services.

In Enter a URL for your service, type a subdomain name to create the URL at which your service will be available.

If you
have created Affinity groups and you want to assign this service to a
particular group, click the radio button next to the Affinity Group
dropdown. Otherwise leave the default setting of No Affinity.

From Choose a region, select the region in which your service will be hosted.

If you are not deploying a package to the service at this time, click Do not deploy, and then click OK.

If you are deploying a service at this time follow the procedure in the next section.

When you deploy a service you can choose to deploy to either the staging environment or the production environment. A service deployed to the staging environment is assigned a URL with the following format: {deploymentid}.cloudapp.net. A service deployed to the production environment is assigned a URL with the following format: {hostedservicename}.cloudapp.net. The staging environment is useful as a test bed for your service prior to going live with it. In addition, when you are ready to go live, it is faster to swap VIPs to move your service to the production environment than to deploy it directly there. For more information, see the documentation on swapping VIPs.

In Configuration file, click Browse, and then click the ServiceConfiguration.cscfg file.

Click OK.

Delete a Hosted Service from Windows Azure

Use the
following procedure to delete a hosted service from Windows Azure by
using the Windows Azure Platform Management Portal. Before you can
delete a service, you must delete each of its current service
deployments.

To
delete a hosted service, you must be either the service administrator
or a co-administrator for the Windows Azure subscription that was used
to deploy the service.

The
items list displays all hosted services for which you are an
administrator, sorted by subscription. Beneath each service, each
service deployment is listed. Before you can delete the service, you
must delete all current deployments of the service.

If needed, delete each service deployment for the service.

To delete a service deployment:

In the items list, expand the subscription, expand the hosted service, and then click the service deployment to select it.

Stop the service deployment if it is running. To do this, click the service deployment to select it. On the ribbon, in the Service Deployment group, click Stop. Then wait until the service deployment's status changes to Stopped.

With the service deployment still selected, on the ribbon, in the Service Deployment group, click Delete. The deletion process might take several minutes to complete.

To delete the service, in the items list, click the service to select it. Then, on the ribbon, in the Service group, click Delete.

The Cloud Standards Customer Council has recently released a white paper on Security in the Cloud.

Reading through the paper, one section highlights the challenges to be addressed: “As consumers transition their applications and data to use cloud computing, it is critically important that the level of security provided in the cloud environment be equal to or better than the security provided by their traditional IT environment. Failure to ensure appropriate security protection could ultimately result in higher costs and potential loss of business thus eliminating any of the potential benefits of cloud computing.”

The paper calls out 10 steps one must take to ensure that security in the Cloud is aligned with one’s business requirements.

Free access to the white paper is at http://www.cloud-council.org/security.htm .

"Ruby on Rails seems to be the new hotness in the world of web development, right up there with Ajax. IBM DeveloperWorks has a helpful howto on how to bring the worlds of Ruby on Rails and your DB2 framework together. From the article: 'Because Rails emerged from the open source world, until recently you had to use MySQL or PostgreSQL to work with it. Now that IBM has released a DB2 adapter for Rails, it's possible to write efficient Web applications on top of your existing DB2 database investment.'" Happy!

The administrative task scheduler (ATS)
enables DB2 database servers to automate the execution of tasks. It
also provides a programmable SQL interface, which allows you to build
applications that can take advantage of the administrative task
scheduler.

The administrative task scheduler
manages and runs administrative tasks, which must be encapsulated in
either user-defined or built-in procedures. You can add, update and
remove tasks from the scheduler's task list by using a set of
built-in procedures. You can also monitor the task list and the
status of executed tasks by using administrative views.
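For example, the task list and the outcome of past runs can be checked with a query such as the following (a sketch, assuming the scheduler has been set up as described below):

```shell
# List scheduled tasks and the status of their most recent runs
db2 "SELECT NAME, STATUS, INVOCATION, BEGIN_TIME, END_TIME FROM SYSTOOLS.ADMIN_TASK_STATUS"
```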

DB2 process responsible for running scheduled jobs:

Scheduled tasks are executed by the DB2
autonomic computing daemon, which also hosts the health monitor and
automatic maintenance utilities. This daemon appears in the process
list as db2acd and starts and stops in conjunction with the database
manager. Every five minutes the DB2 autonomic computing daemon checks
for new or updated tasks. To do this, it briefly connects to each
active database and retrieves the new and updated task definitions.
The daemon does not connect to inactive databases. To ensure
scheduled tasks are executed as expected, the database must remain
active and the task's earliest begin time should be at least five
minutes after it is created or updated.

Internally, the
daemon maintains a list of the active tasks. When a task's scheduled
execution time arrives, the daemon connects to the appropriate
database and calls the procedure associated with the task. If the
database is not active, the daemon will not execute the task; it
writes an ADM15502W message in both the administration notification
log and the db2diag.log. If, for some other reason, the daemon fails
to execute the task, an ADM15501W message is written to both the
administration notification log and the db2diag.log. The daemon then
automatically attempts to execute the task every 60 seconds.

The daemon will
never execute a task if a previous instance of the same task is still
outstanding. For example, assume a task is scheduled to run every 5
minutes. If, for some reason, the task takes 7 minutes to complete,
the daemon will not execute another instance of the task at the next
5 minute interval. Instead, the task will run at the 10 minute mark.

The
administrative task scheduler operates independently of the IBM Data
Studio and Database Administration Server (DAS). It is included in
DB2 database servers and is disabled by default. In order to
successfully execute tasks, you must set up the administrative task
scheduler.

Setting up the administrative
task scheduler

Set the DB2_ATS_ENABLE registry
variable to YES, TRUE, 1, or ON

For example:

db2set
DB2_ATS_ENABLE=YES

Create the SYSTOOLSPACE table
space

Like other DB2
administration tools, the administrative task scheduler depends on
the SYSTOOLSPACE table space to store historical data and
configuration information. You can check if the table space already
exists in your database system with the following query:

SELECT TBSPACE FROM SYSCAT.TABLESPACES WHERE TBSPACE = 'SYSTOOLSPACE'

If your database does not have this table space, you must create it. Otherwise you will receive an error message when you try to add a task to the administrative task scheduler:

SQL0204N
"SYSTOOLSPACE" is an undefined name. SQLSTATE=42704

Any user that belongs to the SYSADM or SYSCTRL group has the authority to create this table space. For instructions, refer to "SYSTOOLSPACE and SYSTOOLSTMPSPACE table spaces".

For example:

CREATE
TABLESPACE SYSTOOLSPACE IN IBMCATGROUP

MANAGED BY AUTOMATIC STORAGE

EXTENTSIZE 4

Activate your database. Your database must be active for your tasks to execute on time. The best way to do this is to use the ACTIVATE DATABASE command. Alternatively, you can keep a database active if you maintain at least one database connection at all times.
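As a sketch (the database name is a placeholder):

```shell
# Keep the database active so that scheduled tasks run on time
db2 activate database db_name
```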

Results

Once the administrative task scheduler is set up, the DB2 autonomic computing daemon starts checking for new or updated tasks by connecting to active databases every five minutes.

Adding jobs to ATS

A job is created
by adding a task to the ATS. This can be done in two ways:

a) the ADMIN_TASK_ADD procedure, to define new scheduled tasks

b) the DBMS_JOB module

Let's see how to
schedule jobs using each one of them.

ADMIN_TASK_ADD procedure – Schedule a new task

The ADMIN_TASK_ADD procedure schedules an administrative task, which
is any piece of work that can be encapsulated inside a procedure.

Syntax:

ADMIN_TASK_ADD--(--name--,--begin_timestamp--,--------------->
>--end_timestamp--,--max_invocations--,--schedule--,------------>
>--procedure_schema--,--procedure_name--,--procedure_input--,--->
>--options--,--remarks--)

Usage:

Example: Consider a stored procedure (proc_insert)
that must be scheduled to run at 10:45 AM every day.
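
A sketch of the corresponding call (the schema name MYSCHEMA is an assumption; the schedule parameter uses UNIX cron format, so '45 10 * * *' means 10:45 AM every day):

```sql
CALL SYSPROC.ADMIN_TASK_ADD(
    'daily proc_insert run',  -- name: a unique task name
    NULL,                     -- begin_timestamp: start immediately
    NULL,                     -- end_timestamp: no end date
    NULL,                     -- max_invocations: unlimited runs
    '45 10 * * *',            -- schedule: 10:45 AM every day, cron format
    'MYSCHEMA',               -- procedure_schema (assumed)
    'PROC_INSERT',            -- procedure_name
    NULL,                     -- procedure_input: no input parameters
    NULL,                     -- options
    NULL )                    -- remarks
```

The status of scheduled runs can afterwards be checked through the ADMIN_TASK_STATUS administrative view.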

This article describes the integration between SPSS and Cognos
solutions. The integration with Cognos comes in the form of being able
to connect directly to Cognos as a data source for SPSS Modeler, and also
being able to export results directly to Cognos so that Cognos can report
on them. The benefit of using Cognos as a source is that your data is all
in one place, formatted and tidied up suitably for SPSS analytics.
Exporting the results to Cognos has the benefit of being able to report
on them using the familiar Cognos reporting formats and various types of
Cognos reports, such as dashboards and active reports. The figure below
showcases one such context diagram and its integration with SPSS:

Implementing this is reasonably straightforward and involves selecting
the Cognos source node or the Cognos export node and dragging it into
your stream, as shown below:

You then have to edit the Cognos node and add the Cognos server IP
address details and login credentials, as shown in the figure below:

You also need to create an ODBC connection to the Cognos data warehouse
from the SPSS server. This connection must have the same name and
details as the Cognos data source; if these details do not match on the
Cognos and SPSS servers, the integration will not work.

The figure below shows the details:


Note: To either export to Cognos or use Cognos as a data source, an ODBC
connection to the Cognos data warehouse must be established, in addition
to the Cognos server connection.
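
As an illustration only, on a Linux/UNIX SPSS Modeler server the odbc.ini entry might look like the following hypothetical sketch (the DSN name CognosDW, the driver path, host, and port are all assumptions, and the key names vary by ODBC driver; what matters is that the DSN name matches the data source defined in Cognos):

```ini
; Hypothetical ODBC DSN for the Cognos data warehouse
[CognosDW]
Description = Cognos data warehouse (name must match the Cognos data source)
Driver      = /opt/odbc/lib/libdwodbc.so
Database    = DWDB
Hostname    = dw.example.com
Port        = 50000
```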

Summary:

In this way, we can integrate IBM Cognos BI with IBM SPSS products and report on the prediction and analysis results that SPSS produces.

Managing software and product lifecycle integration has always been a challenge, and with the rate of new demands on the enterprise, the challenges are increasing. Leaders from standards organizations and industry will lead interactive discussions on the importance of open technologies in helping enterprises manage lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards, and why this work matters to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.

The Summit is free to attend for all those attending IBM Innovate. Join us for an exciting session and refreshments to start your attendance at Innovate 2013. For more information and to RSVP, visit http://ibm.co/16jTusU

As one of the core developers on the DB2 Connect CLI team, I got an opportunity to work on supporting the generic special registers feature. The idea behind this blog is to describe its benefits and usage, to help the application development community understand and leverage it.

Though the focus of this blog remains CLI centric, a similar concept exists in other client drivers such as the IBM DB2 .NET provider and the IBM JDBC driver (aka JCC).

The IBM Data Server Driver configuration file (named db2dsdriver.cfg by default) is gaining popularity among customers due to its capability of allowing different DSNs and database properties to be configured in a central repository. In addition, being in XML format, it takes little effort for a user to get used to such configuration files. In DB2 Connect V10.1 Fix Pack 2, CLI added a new capability to db2dsdriver.cfg that allows users to set special registers generically.

Before I go deep into the feature explanation, let me begin by answering a few basic questions:

What are special registers?

A special register is a storage area that is defined for an application process by the database manager. It is used to store information that can be referenced in SQL statements.

To know more about special registers with examples, refer to the following link:

What is the existing method of setting special registers from client applications?

There is a set of special registers that can be set (or updated) by client applications. An application can modify such special registers programmatically using SET SQL statements. There are also a few special registers for which DB2 CLI provides connection-level keywords; an application can set these keywords via either the db2dsdriver.cfg or the db2cli.ini configuration file.
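
For example, the programmatic route issues statements such as these after connecting:

```sql
-- Set special registers programmatically from the application
SET CURRENT SCHEMA = 'MYSCHEMA';
SET CURRENT DEGREE = 'ANY';
```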

Limitations of the existing methods of setting special registers:

Setting special registers programmatically requires modifying the application source code and recompiling each time a special register needs to be added, removed, or modified. Also, this must be taken care of in all impacted application programs.

Using the special registers that can be set as CLI keywords is a better approach than the former, but with only a limited list of such keywords, applications do not get a complete solution. CLI could be enhanced to support each requested special register as a keyword; however, with the data server introducing new special registers in each release, this remains a never-ending exercise, and it requires users to upgrade their client drivers to get support for newer special registers as keywords.

What newer mechanism does CLI provide to address the above situation?

To overcome the drawbacks of both approaches above, a more generic solution was desired. As a result, CLI has introduced a dedicated section for special registers, <specialregisters>, in the configuration file db2dsdriver.cfg. This section allows users to specify the list of special registers they would like to configure. Based on need, a <specialregisters> section can be added at the DSN level, at the database level, or even globally.

During each connection to a given DSN or database, CLI reads db2dsdriver.cfg and processes the <specialregisters> section in the following manner:

- read each special register name and its value from the <specialregisters> section of the given DSN or database

- without scanning or interpreting them, form a chain of special registers to be sent to the connected data server

- upon the first SQL statement of the connection, flow the chained special registers to the server

- the server processes each special register in the chain (along with the first SQL statement of the connection) and sets it appropriately

As we can see from the above logical flow, with this feature CLI has no need to know or validate the special registers. It simply flows the entries from the <specialregisters> section to the server and lets the server do the necessary validation. Another benefit is that, because the special registers are chained together with the first SQL statement of the connection, the network trips needed to set them are reduced significantly.

When a server upgrade occurs and a user application wants to set newly supported special registers, all the user needs to do with this new CLI feature is add those special registers to the <specialregisters> section! As we can see, no driver upgrade is needed in order to use newer special registers.

Illustrating usage of <specialregisters>

Having given some background, I can now proceed with the working of this feature. Let's begin by adding a <specialregisters> section to an existing or new db2dsdriver.cfg configuration file:

1. Special Registers applicable across all DSNs/databases (residing under the global <parameters> section)

CURRENT DEFAULT TRANSFORM GROUP = 'MYSTRUCT2'

CURRENT LOCALE LC_MESSAGES = 'en_CA'

2. Special Registers applicable for DSN = sample

CURRENT SCHEMA = 'MYSCHEMA'

CURRENT DEGREE = 'ANY'

CURRENT DEFAULT TRANSFORM GROUP = 'MYSTRUCT2'

CURRENT LOCALE LC_MESSAGES = 'en_CA'

3. Special Registers applicable for database = sample2

CURRENT SCHEMA = 'MYSCHEMA1'

CURRENT DEGREE = 'ANY'

CURRENT DEFAULT TRANSFORM GROUP = 'MYSTRUCT2'

CURRENT LOCALE LC_MESSAGES = 'en_CA'
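
Assembled into db2dsdriver.cfg, the three cases above might look like the following sketch (the host and port values are placeholders, and the <parameter name="..." value="..."/> element form follows the usual db2dsdriver.cfg conventions, so verify the exact syntax against your driver's documentation):

```xml
<configuration>
   <!-- 1. Global: applies to all DSNs/databases -->
   <parameters>
      <specialregisters>
         <parameter name="CURRENT DEFAULT TRANSFORM GROUP" value="'MYSTRUCT2'"/>
         <parameter name="CURRENT LOCALE LC_MESSAGES" value="'en_CA'"/>
      </specialregisters>
   </parameters>
   <!-- 2. DSN level: applies to DSN "sample" -->
   <dsncollection>
      <dsn alias="sample" name="sample" host="myhost.example.com" port="50000">
         <specialregisters>
            <parameter name="CURRENT SCHEMA" value="'MYSCHEMA'"/>
            <parameter name="CURRENT DEGREE" value="'ANY'"/>
            <parameter name="CURRENT DEFAULT TRANSFORM GROUP" value="'MYSTRUCT2'"/>
            <parameter name="CURRENT LOCALE LC_MESSAGES" value="'en_CA'"/>
         </specialregisters>
      </dsn>
   </dsncollection>
   <!-- 3. Database level: applies to database "sample2" -->
   <databases>
      <database name="sample2" host="myhost.example.com" port="50000">
         <specialregisters>
            <parameter name="CURRENT SCHEMA" value="'MYSCHEMA1'"/>
            <parameter name="CURRENT DEGREE" value="'ANY'"/>
            <parameter name="CURRENT DEFAULT TRANSFORM GROUP" value="'MYSTRUCT2'"/>
            <parameter name="CURRENT LOCALE LC_MESSAGES" value="'en_CA'"/>
         </specialregisters>
      </database>
   </databases>
</configuration>
```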

The special registers configured above for the relevant DSNs/databases come into effect with the first SQL statement issued after the connection; it is at this point that the special register settings are applied at the server.

In the above application logic, "INSERT" is the first SQL statement after the connection. Along with this SQL statement, the effective special registers list (as configured in db2dsdriver.cfg) is formed, and these special registers are set at the server. If setting any special register at the server results in a warning or an error, it is chained to the response of the first SQL statement. The application can call the SQLGetDiagRec() API to retrieve the warning or error details and diagnose the problem.

Where can I not use this new feature?

For setting client info properties, using the <specialregisters> section is not recommended. The existing mechanisms, either CLI keywords or environment/connection-level attributes, can be used instead.

If the application logic needs to set special registers during the life of the connection (not at its initial phase), or to change special registers in between, then setting them programmatically is the only way. The new feature is useful only for setting the initial value of a special register for the connection.

In summary, as an application user, one gets the following benefits with the new feature:

1. Savings in time and network utilization through reduced network flows

Network round trips between client and server are reduced, since the most optimal DRDA protocol is used when flowing the special register settings to the server.

Moreover, chaining the set of special registers along with the first SQL statement of the connection saves another network round trip by using a piggyback mechanism.

2. Less maintenance and fewer driver upgrades:

The new approach avoids the need for a driver-level upgrade just to exploit a new server special register. All users need to do is add the new special register entry to the <specialregisters> section of the existing driver's db2dsdriver.cfg file (the minimum driver level required is V10.1 Fix Pack 2). For the many big organizations that have thousands of client drivers installed on workstations, this saving brings a lot of relief.

3. Centralized maintenance:

Using the central configuration method of db2dsdriver.cfg, users can now add, remove, or edit the special registers for their applications in a much more controlled manner. Also, with the flexibility of placing <specialregisters> at the DSN, database, or global level, users can tune it to their needs quite easily.