Recently I was facing a problem in one of my experiments with Bluemix: I was trying to connect from a Bluemix application to a local database on my system. I was failing to do so, the reason being that my system was behind my organization's firewall. Bluemix, being on the open network (internet), cannot access a network behind a firewall. Of course, this is exactly what most organizations intend, for security reasons and to make sure no unwanted access happens on their network.

Now that we are in the cloud world, where organizations are planning to move their applications to the cloud, one of the requirements will be to have some communication between their applications on the cloud and the applications/tools/databases inside their premises. Organizations cannot do away with firewall security either. So what is the solution here?

The Cast Iron Live cloud integration add-on in Bluemix actually helps you achieve this: communication in a secure manner without compromising your firewall security. Lately I have been playing with this service and found it really worthwhile for achieving the kind of integration most organizations require.

Cast Iron Live allows you to create a secure connector between a system behind a firewall and the Cast Iron service on Bluemix, and it is this service that communicates with the application. So, in summary, Cast Iron Live creates a secure channel between the application and the system behind the firewall in such a way that security is not compromised.

Overall, the process includes creating a Cast Iron service on Bluemix, installing the secure connector on the system and configuring it, creating endpoints and defining the inputs/outputs of the various endpoints using Cast Iron Studio, deploying it, and using the endpoints to access the system from your application. Cast Iron Studio supports various endpoints including database (JDBC access), HTTP, HTTPS, etc. I am planning to cover each of these steps in my next blog entry. Till then, keep reading and experimenting with Bluemix and the various services available.

Last week I got a request from one of my colleagues to help her deploy one of their applications to Bluemix. This application is an enterprise application, and I was given an ear file and a readme file which just provided step-by-step instructions for installing the application in a local Liberty profile.

To start with, the task seemed easy, as we already had the step-by-step instructions for a local Liberty profile and Bluemix uses the Liberty runtime anyway. My understanding was that there wouldn't be many differences except that I wouldn't need the Liberty installation (as I would get it on Bluemix), but as I moved on I started seeing some challenges which are very particular to a cloud environment. These challenges changed my view and clarified some differences between how an application should be developed and deployed in a cloud environment vs. on premises.

The first step was to push the ear file to Bluemix using cf push, and that worked smoothly. My web module was up and running within a couple of minutes. The next task was to have the database up and running and configured with the application. I added an sqldb service and bound it to the application. Till now it was smooth sailing, but the real challenge came next. How do you make sure the application is configured for and connects to the database, literally without changing the application code?

If I think of how we do it in a local environment, we put the details of the database, like the db name, user name, password, host name, port number, etc., in the server.xml file of the Liberty profile and give it a JNDI name. This JNDI name is then referred to in the application to connect to the database. This server.xml file is part of the Liberty server, not the application. In a cloud environment we generally don't get access to the server and hence have no control over the server.xml file of the default Liberty runtime on Bluemix. The step-by-step instructions given to me say to copy the server.xml file, zipped along with the application, to my server profile. But this is not possible when working with the cloud. The same problem goes for the database driver jars. In a local installation we generally copy them to the server library. In the cloud we would need to make them part of the application library (under WEB-INF/lib). Neither of these alternatives is possible with the standard Liberty buildpack and an ear which cannot be changed.

After gaining some understanding of buildpacks and how I can push changes to them, I found the solution to the problem. So here is what I did.

Install a local Liberty server and create a defaultProfile.

Install the ear file in the local Liberty profile.

Edit the local server.xml file (generally found in LIBERTY_HOME/usr/servers/<profile name>) to create the JNDI name used in the application. I actually copied the data source configuration from the server.xml file given to me as part of the application zip and edited it to add credentials from the sqldb service I created on Bluemix, keeping the JNDI name the same (as that is what the application uses to connect).
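For illustration, here is a rough sketch of what the edited data source stanza in server.xml might look like. The JNDI name, database values, and jar file names are placeholders, not the real ones; the actual values come from your application and from the credentials of the sqldb service on Bluemix.

```xml
<server description="packaged Liberty server">
    <!-- Driver jars copied under LIBERTY_HOME/usr/shared/resources -->
    <library id="db2Lib">
        <fileset dir="${shared.resource.dir}" includes="db2jcc4.jar db2jcc_license_cu.jar"/>
    </library>

    <!-- The jndiName must match the one the application looks up -->
    <dataSource id="sqldbDS" jndiName="jdbc/myAppDS">
        <jdbcDriver libraryRef="db2Lib"/>
        <properties.db2.jcc databaseName="SQLDB"
                            serverName="host-from-sqldb-credentials"
                            portNumber="50000"
                            user="user-from-sqldb-credentials"
                            password="password-from-sqldb-credentials"/>
    </dataSource>
</server>
```

Keeping the driver jars under the shared resources directory, as above, means they travel inside the packaged zip along with this configuration.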

Use the server package command to zip the usr directory, which contains the server runtime and the application details. The command used here is

LIBERTY_HOME/bin/server package <profile name> --include=usr

This will create a zip file under LIBERTY_HOME/usr/servers/<profile name>.

Now use the cf push command to push both the Liberty runtime and the application together from LIBERTY_HOME/usr/servers/<profile name>. The command used here is

cf push -p <profile name>.zip <application name>

Once the push command is successful, your application will be able to use the JNDI name and connect to the new Bluemix sqldb service.

So what is this solution actually doing? When you run the server package command with the usr directory included, it zips the whole server runtime and the application into one package. When you push this to Bluemix, Bluemix detects the server.xml file inside the zip, understands that the zip contains both the application and the runtime, and overwrites the existing configuration file (server.xml) with the one you created and zipped.

The same solution works for the driver jar files too. You can add the driver files to your local Liberty profile under the shared resources directory (LIBERTY_HOME/usr/shared/resources), and they will be zipped as part of your server package. Upon cf push, they become part of your customized buildpack, accessible to the application. When you do this, make sure that your server.xml file has the correct path for these JDBC driver jars; otherwise Liberty will not be able to find them at connection time.

So this solved my problem: the application was deployed successfully and was using my customized Liberty buildpack and the sqldb service on the cloud.

This process can be used for anything that requires you to change your buildpack, including adding new Liberty features to your cloud Liberty buildpack.

Lately I have been trying one of the services on Bluemix, Twilio, which allows you to send and receive SMS through your application. For my on-demand car pooling application I am planning to implement OTP authentication by sending a one-time password to users to validate their mobile numbers.

Have you ever done such a thing, or have you done anything with Twilio in the past? Do share your experience. This will surely help me implement the same quickly.

Waiting to hear from you. In case I make some progress or learn things on this topic, I will surely share it with you in my next blog entry.

I am sure you might have been wondering where I disappeared to after writing a lot on exploring Bluemix, and my answer is that I was making a transition from exploring Bluemix to developing with Bluemix. After so much learning with Bluemix, I decided to develop one real application to see how fast I could do that. The last month or so was spent on this. Along with my office work I was spending around 4-5 hours a week on it. The application I was developing was for on-demand car pooling. So here is my experience.

Creating a Bluemix environment with Liberty runtime and database took me just a couple of minutes.

Setting up the development environment with Eclipse, installing the CF plugin, and setting up the server connection to Bluemix took me a couple of hours.

Putting up FB authentication mechanism took me a day.

As I was quite new to JavaScript and I was planning to use the Google Directions APIs, it took me some time to learn the JavaScript concepts.

Putting a basic flow of the application together took around 2-3 weeks (when I was spending 5-6 hours a week, so effectively it was just a 2-3 full working day job).

So overall it took me only about a full week of work to put this application together and make it live.

smartpoolers.mybluemix.net

I am sure you can guess now how easy it is to put up a live website/portal using the Bluemix platform. Have you given it a try? If not, do try it out.

A couple of days back I received a query from one of my colleagues on how one can actually access the Bluemix services from application code. While I covered some of this in one of my earlier posts here,

I thought of having a dedicated entry on the same, as this is important information from an application development perspective.

So whenever you create an application in Bluemix and bind a service to it, you need some access parameters (or other information which allows you to consume the service in an application). Taking the example of a database service bound to my application, I will need the URL, hostname, user ID, and password so that I can create a JDBC connection from my application. Bluemix creates an environment variable, VCAP_SERVICES, in your application runtime to store such information. So if you are writing a web application, it will be under your web server (Liberty) runtime. To see this information, you can click on the runtime under your application and you will find this variable under environment variables. Below is the snapshot for this.

You can also parse this information dynamically from your application (instead of hard coding these values). Here is example code for the same.
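As a minimal sketch of such parsing, the Java class below pulls individual credential fields out of the VCAP_SERVICES JSON. The class name, the regex-based extraction (a stand-in for a proper JSON library), and the sample credential values are all hypothetical; only the VCAP_SERVICES variable name itself comes from the platform.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VcapCredentials {

    // Pulls the first string value for the given key out of the VCAP_SERVICES JSON.
    // A real application would use a JSON library; regex keeps this sketch dependency-free.
    static String extract(String vcapJson, String key) {
        Matcher m = Pattern.compile("\"" + key + "\"\\s*:\\s*\"([^\"]+)\"").matcher(vcapJson);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // On Bluemix the variable is populated by the platform at bind time.
        String vcap = System.getenv("VCAP_SERVICES");
        if (vcap == null) {
            // Fallback sample mirroring the shape of a bound sqldb service (values hypothetical)
            vcap = "{\"sqldb\":[{\"credentials\":{\"hostname\":\"db.example.com\","
                 + "\"port\":\"50000\",\"username\":\"user\",\"password\":\"secret\","
                 + "\"db\":\"SQLDB\"}}]}";
        }
        System.out.println("host=" + extract(vcap, "hostname")
                + " port=" + extract(vcap, "port")
                + " db=" + extract(vcap, "db"));
    }
}
```

The extracted host, port, user, and password can then be fed into your JDBC connection setup instead of hard-coded values.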

Now you can develop your application using Eclipse and use these credentials to connect to the service (a database in this case) and work with it.

A little late, but I am back with another experiment I did some days back. I thought of developing a small application using the Eclipse framework and deploying it on Bluemix.

To achieve that, the first step is to install a plugin for Cloud Foundry (the open standard cloud API Bluemix is based on) so that we can work with a web server instance on the cloud directly. Below is a step-by-step guide on how one can install the required plugin and create a server instance. I am assuming that you are familiar with the Eclipse environment for Java.

In your Eclipse environment, go to Help --> Eclipse Marketplace.

Search for Cloud Foundry.

You will see a list similar to the snapshot below.

Select the plugin Cloud Foundry Integration for Eclipse 1.6.1 (or similar) and install it. Just follow the simple steps to install the plugin. After the install it might ask you to restart Eclipse.

Once the installation is complete and Eclipse is restarted, you will be able to create a new server based on Cloud Foundry (Bluemix). Create a new server.

Provide your Bluemix account details (email and password). Make sure that in the URL drop-down menu you have selected the URL you added in the previous steps. Click Next. This will validate your account and provide you with a list of spaces.

Please note you may not get the list of spaces in case you have only one default space.

Click Next and it will give you an option to add any existing application to the newly created server.

Click Finish and you will be able to see the server in the server list. You can now deploy any application on this server just like you do for any other server. After deploying, you can see these applications from the Bluemix interface too.

So now you have your server up and running and your Eclipse environment is ready to use the Bluemix server. Just go ahead, develop, and deploy your application.

In the last week, I had a chance to visit one academic institution and one corporate office to evangelize BlueMix. I was really thrilled to see the kind of interest it is generating among the developer and student communities.

In ABES, around 60 faculty members, mostly from the IT and Computer Science departments, attended the session. One of my colleagues, Nihar, covered the basics of BlueMix, its architecture, and its benefits. I covered the practical session, where I showcased the various user interface components and how to develop a basic web application using the Eclipse plugin for Cloud Foundry. I also covered how a developer can move a local database to the cloud before deploying a web-based, database-centric application.

In HCL Technologies, around 50 developers attended the session. Another colleague of mine, Neeraj, covered the basic concepts, while I covered the hands-on part, Cloud Foundry concepts, and how to develop an application and deploy a database.

Some of the questions that came up during these visits are listed below. While I am only listing the questions here, I will try to answer some of them in my future posts.

Cloud in general generates a lot of questions around security, and for BlueMix too this was a concern.

Scalability was another concern, where a couple of folks were interested to know how an application can be scaled to multiple nodes dynamically.

There was also a question of how a C developer can benefit from BlueMix, especially in the mobile development area.

There was a question on how one can use a mixed environment, where one uses on-premises (local) products and integrates them with some of the cloud services to develop a solution.

There were also many questions around DevOps services and the continuous development approach.

There were also questions around collaboration among teams when working on one application: how one can manage access control for users for various activities.

Overall, the sessions were really good and interactive. There is a lot of interest in trying out BlueMix, and I am sure many of them might have already started exploring it.

Today I was talking to one of my colleagues about how a team of developers can work on a BlueMix application collaboratively, and we found the concept of Spaces in BlueMix worth using for such collaboration.

When you log in to BlueMix, on the left-hand side you will see Space.

If you click on Manage Spaces, it allows you to create your own space.

In any space, you can define users and domains. The user should be an existing user of BlueMix. Once you have a user added, he/she will be able to see any work/application you have on your dashboard under that space. Any changes you make will be reflected to the other users, and vice versa.

This way one can collaborate on a common application and see each other's changes. This also helps when one user leaves the project and others join: you can keep adding users to the shared space.

As I promised, today I will talk about how to reverse engineer your existing database to extract the required DDL and deploy it on the cloud. For the folks who have worked with InfoSphere Data Architect, this is just a 10-minute job, and for the people who have not, add on 10 minutes more for learning :). Yes, only another 10 minutes.

So let's get started. As a prerequisite, do download and install the InfoSphere Data Architect product.

Below is the step-by-step process.

Create a connection to the local database using InfoSphere Data Architect. The snapshot below shows how to create a new connection.

Provide access details.

Click on Test Connection to check that the access parameters are correct and the connection is successful.

Creating an application with a database service provides you the access parameters to connect to the BlueMix database, as part of the VCAP_SERVICES environment variable. The rest of the steps remain the same as above. Do test the connection to verify the access details.

Create a data design project as shown below.

Create a physical data model inside the newly created project.

Select "create from reverse engineering" for the new physical model.

On the next page, select database and choose the database name (sample in my case).

On the next page, select the schemas you want to move to BlueMix. Please make sure you select only user-defined schemas.

Select the objects you would like to move. Please note that you cannot select schemas, privileges, tablespaces, etc. due to the tenancy model of BlueMix, so avoid selecting these objects.

Select the db on BlueMix and run. This will generate the objects on the database in BlueMix. You can log in to the environment and check the db console for the new objects.

So now you know how you can reverse engineer the local database on your system and deploy it on the cloud. Happy reverse engineering, and do let me know if you face any issues following the steps.

Next experiment I am planning to do is to create a Log in page for my application which take advantage of Facebook login instead of creating my own implementation. So stay tuned and enjoy BlueMixing :).

I can't stop myself from sharing this with all of you. One of my teammates has developed a simple tic-tac-toe web app on BlueMix, and he claims to have built it in 10 minutes. No worry about a host server; he just deployed it on a ready-made deployment infrastructure, i.e. BlueMix. Here is the application link.

I know I promised in my previous entry that I would talk about how to deploy a database on the cloud using reverse engineering. However, before I explain that, I really want to let you know how to create an application in the BlueMix environment. This is also important because you need to create an application to know the database connection credentials. These credentials in turn will be required to connect to the db from a remote client like InfoSphere Data Architect, which is going to help us do the reverse engineering and deployment.

So let's learn how to create an application.

When you log in to the BlueMix environment, you will see a Dashboard with a list of all your applications and services (if you have not created any yet, you will see an empty dashboard). The Catalog section lists all the services, runtimes, and boilerplates ready to be consumed by an application. While the names "service" and "runtime" are quite self-explanatory, "boilerplate" is a unique and interesting concept. A boilerplate actually combines a runtime and services together for a particular type of application and gives them to you when you create an application using it. Below is a snapshot of the boilerplates available as of now.

To explain one of them: the Java+DB Web Starter boilerplate gives you a ready-made platform with the integrated services of a Java runtime, a database, and an application server. So if you create an application using this boilerplate, all of these will be given to you directly.

To create an application you can click on any of these boilerplates based on your need. It will prompt you to give a name for the application (which will also act as your domain name). Below is a snapshot of one of the applications I created using this boilerplate.

Clicking on the application (ManojApp in this case) will give you the details of the services this application is using. In this case we have an application server and a database service (sqldb).

Clicking on the server (Liberty for Java) will give you finer details like resource consumption, instance details, and an important environment variable called VCAP_SERVICES. This environment variable has the details of every service this application is consuming, including the credential details of your database, which you can use to connect from the outside world (and from the application you are going to write).

In case you don't want to create an application from a ready-made set of services and runtimes (a boilerplate), you also have the option of creating it from scratch. Of course, in this case you need to add/bind the services you want for this application explicitly. You have Create an Application and Create a Service sections in your dashboard for this case. Whenever you create a service, you will have an option to bind it to your existing application.

In one of my future posts (maybe after I talk about database deployment using reverse engineering), I will talk about how to create an application using Eclipse and deploy it on the BlueMix environment within a span of 10 minutes, so stay tuned :)

I am sure many of you might be excited after hearing and reading about the BlueMix offerings on various platforms, including this blog. I have done many experiments with the DB2 BLU Acceleration offering on BlueMix and will be sharing my experience with you this Tuesday (06th May 2014). You are all invited to attend this webinar.

Webinar on DB2 Blu on the Cloud : Offering from BlueMix
Event Date: May 6, 2014
Event Time: 3:00 PM - 4:00 PM (India Standard Time)
Hosted By: APRA SAHNEY (IBM)
Presented By: MANOJ SARDANA (IBM)
BlueMix is a cloud-based platform from IBM where you can develop & deploy an application on a ready-made infrastructure without worrying about memory and storage, resulting in faster development and deployment.

BLU Acceleration is provided as a service on BlueMix; it is based on DB2 BLU technology and provides faster in-memory analytics capability. This session will cover how one can leverage the DB2 BLU Acceleration service on BlueMix. It will also touch on the various options for deploying your database on BlueMix and making use of it in your application.

Speaker: Manoj Sardana, Staff Software Engineer, IBM
Manoj is an IBM certified advanced administrator and application developer for DB2 LUW V9. He has both QA and development experience. He has authored/co-authored multiple articles and books and has presented at various conferences like IDUG and IMTC.
Blog : https://www.ibm.com/developerworks/community/blogs/msardana/?lang=en

One of my basic experiments on BlueMix was to create a database and see if I could move my local database to the cloud and free up my resources. I knew that I could create my own database and work with it in the BLU Acceleration offering, but for existing database users, chances are high that there is already a database on a local system. So the first thing one has to do is to move the existing database to the cloud and free up the local system/infrastructure. I decided to try it out.

I didn't have a working database on my system, so I thought of creating one for this experiment. I have DB2 10.5 installed on my system, so I decided to create the sample database using the db2sampl tool which comes along with the server installation. This tool creates a sample database with some tables, indexes, views, stored procedures, and many more objects, and loads data. So one can say that it is similar to a production database, but of minimal size.

After logging into the BlueMix portal and adding the BLU Acceleration offering, I got a console to manage the service. Below is a snapshot of the BLU Acceleration web console.

So I have the option of designing my database or using existing data models. Under "design your database", it gives the option to use a product like InfoSphere Data Architect to model your database locally and deploy it back on the cloud. I also have the option to load data from various sources into existing tables (which I can do only once I have my tables created).

Being a public offering, whenever you add the BLU Acceleration service to a cloud application, a database gets created or shared with you along with a user ID. You are given access to a schema and you can create objects in this schema. This means that one doesn't have full control over the database. This prevents a user from doing a recovery of the whole database, as this might disrupt other users. BlueMix gives you the following options to actually move your database to the cloud.

Create your DDLs using tools like db2look and run these DDLs over the cloud database connection. This also means that you may need to load the tables individually. This process can be quite complex for big databases.
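As a sketch, the db2look invocation for the sample database used in this post could look like the following (the output file name is arbitrary):

```
# -d: database name, -e: extract DDL for database objects, -o: output file
db2look -d sample -e -o sample.ddl
```

The resulting sample.ddl file then contains the CREATE statements you run over the cloud database connection.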

Once you have the DDL extracted from your local system, go to the "Work with Tables" option as shown in the snapshot below.

Once you select the option, you see the following screen.

So the next step was to run the DDLs. Clicking on the + sign gives a box to run the DDLs.

While using this option, you may need to make sure that you create tables and objects in such a way that objects with dependencies are created only after the objects they depend on. So you may need to divide your DDL accordingly.
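For example, with the DEPARTMENT and EMPLOYEE tables of the sample database (column lists trimmed here for brevity), the parent table has to be created first:

```sql
-- DEPARTMENT must exist before EMPLOYEE, which references it
CREATE TABLE DEPARTMENT (
    DEPTNO   CHAR(3)     NOT NULL PRIMARY KEY,
    DEPTNAME VARCHAR(36) NOT NULL
);

CREATE TABLE EMPLOYEE (
    EMPNO    CHAR(6)     NOT NULL PRIMARY KEY,
    WORKDEPT CHAR(3),
    FOREIGN KEY (WORKDEPT) REFERENCES DEPARTMENT (DEPTNO)
);
```

Running the EMPLOYEE statement before DEPARTMENT would fail on the foreign key, which is exactly the ordering problem described above.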

It also gives you the option to reverse engineer your database to create a database design using InfoSphere Data Architect and deploy it on the database in the cloud. This is something one may need to follow in case the database has many objects. Loading the data will be a separate task in this case too.

I am sure many of you might have heard about BlueMix and might be interested in knowing what it is and how it can help you.

BlueMix is a cloud-based platform from IBM which provides various products and runtimes as services, just by clicking some buttons and without you installing them on your own machine. This allows you to develop and deploy your application on a ready-made infrastructure without worrying about memory and storage, resulting in faster development and deployment.

To get started, the first thing one needs to do is register for the Bluemix Beta program and get access. I got my request approved in a couple of days' time.

Being a database guy, I was more interested in exploring the database and analytics related offerings. Over the last couple of weeks I spent a lot of time playing around with the portal, and here are my experiences with one of the offerings, BLU Acceleration.

BLU Acceleration: this is an offering based on the BLU technology introduced by IBM sometime last year, which has proven to provide faster in-memory analytics capability.

I found that in Bluemix, a lot of ready-made, industry-based database models are available, which one can use or customize using InfoSphere Data Architect and deploy back. You can create your own database objects by passing in DDL statements too. Once you have the database ready, you can load in data from your local system or from Amazon S3. You can also find some ready-made Cognos reports based on the available models, or design your own reports based on your database. In case you have used the Cloudant service, you also have an option to sync your database with Cloudant. You can also independently work with tables and run any query, just like you do on your own system. In case you want to build an application, you get the URL and access parameters which you can use to connect to the database from your application.

So, in summary, you get the BLU Acceleration based database service ready to use, where you can deploy your database, load data, and start doing analysis without the hassle of any local installation. This indeed gives a very quick start to development and can reduce the work of infrastructure setup, so that developers can concentrate on the application requirements and logic and not worry about the setup.

I will keep sharing more experiences, especially from my hands-on experiments with BlueMix. So stay tuned.

ODS 2.2 brings a lot of new functionality and features which will allow you to integrate the complete data management life cycle. Here is the link to the announcement letter, which also gives the details of the new features in this release: Announcement

Here is the blog entry (what's new in ODS) from Sonali on this release, explaining what is new in it.

I was quite happy to see an overwhelming response to TGMC 08. We had a very tough time evaluating all the projects and finding the best of them. Last month we had the prize distribution ceremony here in Bangalore, where all the winning teams in the different categories and their faculties were invited. It was great to see that this initiative brought out the best in these college students and faculty members and gave them a chance to collaborate with the industry giants.

This ceremony also inaugurated the TGMC 09 site. Registration for TGMC 09 is open. I suggest all college students in the IT field register and make the best of it. Here is the site link to register.

I am sure most of us in the IT field know about developerWorks: the largest technical library in the world, with about 8 million users across the globe. Google for any information on IBM or open source technology, and you will end up in developerWorks links.

Here is another initiative from the developerWorks team to make developerWorks more personalized: creating your own profile, participating in community discussions, connecting to like-minded people and your peers, and at the same time getting updates on your favorite blog entries, RSS feeds, and recent activities in your favorite areas. What else do you need? All the information in one place.

I am not sure if you are aware of this competition. If not, I strongly recommend you visit http://xmlchallenge.com/ and select your country.

Take a quick quiz and start participating in the competition. The competition is for people who are either database professionals or students learning database technologies. You have the option of choosing the administration or the programming contest. There are a lot of cool prizes to be won at every level of the competition.

As I talked about making JDBC applications run faster in my last entry, a new version of DSD (Data Studio Developer) has been announced. This new release has a lot more exciting features apart from what I described in my last entry. Modifying the SQL without changing the source code, restricting the SQLs to a set to avoid SQL injection, and enhanced problem determination are some of them. You can find the details of these features in the article below from Sonali.

Have you ever wanted your JDBC application to run its SQL in static mode? Have you ever encountered the problem of not being able to map a problematic SQL statement to the source code where it originated? Have you ever wanted to know how many times an SQL statement gets executed by your application? Or have you ever wanted to know how many SQL statements there are in your application?

If you face any of these problems and want to know how you can achieve these goals, try out the new pureQuery support in the IBM Data Studio product. The client optimizer component actually allows you to capture all the SQLs in your application (even when you don't have the source code of your application) and then allows you to create packages at the DB2 server for those SQLs. Later, when your application runs again, it can execute the same SQL in a static way. This will not just improve performance but also give you the benefits of static SQL, like a different security model, safeguards against SQL injection, control over the access path, etc. The capture file also has additional information like the source code stack trace for the SQLs and the execution count. The tooling in Data Studio automates the lookup of the SQLs in the source code at the click of a button. Below are links to some articles which talk about this product in detail.

http://www.ibm.com/developerworks/edu/dm-dw-dm-0808titzler-i.html

http://www.ibmdatabasemag.com/showArticle.jhtml?articleID=207801106

The performance numbers: http://www.ibmdatabasemag.com/showArticle.jhtml?articleID=208802229

Yesterday one of my colleagues pinged me regarding a problem she was facing with DB2. As per her description, the application was trying to reset the database (which I interpreted as trying to clear out data, so a lot of deletes/updates), and during the operation she was getting sqlcode -964, which says the transaction log is full. As per her, she had done this operation many times and never faced this problem. What's new now? What changed so that more logs than usual were getting generated? I saw the db2diag.log entries directly indicating the same reason, "transaction log full". I suggested she increase the transaction log by increasing LOGFILSIZ. After trying a size of 10 times the normal setting, the problem still persisted. We tried increasing the number of primary log files too, but with the same result. This gave me a doubt that there was something wrong with the application. As she didn't have the code for the application, I suggested she try out the infinite logging option. Unfortunately, that too failed, as it started giving a disk full error. There was surely something wrong with the application. When I looked at the db2diag.log, here are the entries
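For reference, the configuration changes we tried map to DB2 CLP commands along these lines. The database name MYDB and the numeric values are placeholders, not the ones we actually used:

```
# Bigger log files and more primary logs
db2 update db cfg for MYDB using LOGFILSIZ 10000
db2 update db cfg for MYDB using LOGPRIMARY 20

# Infinite logging: set LOGSECOND to -1
# (requires archive logging, i.e. LOGARCHMETH1, to be configured)
db2 update db cfg for MYDB using LOGSECOND -1
```

With infinite logging the limit effectively moves from the log configuration to the disk itself, which is why the failure then showed up as a disk full error.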

These entries keep coming repeatedly, increasing the size of the log, and once the disk is full, DB2 produces a dump and shuts down the database. The path mentioned in LOGARCHMETH1 is a valid path and accessible by the user, so I am not sure which path is invalid here.
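For reference, the knobs we turned look like this from the CLP (the database name, archive path, and sizes are placeholders; LOGSECOND -1 is the infinite logging option and requires a log archiving method to be configured):

```shell
# Grow the transaction log (LOGFILSIZ is in 4 KB pages; values are illustrative)
db2 update db cfg for MYDB using LOGFILSIZ 10000
db2 update db cfg for MYDB using LOGPRIMARY 20 LOGSECOND 40

# Infinite logging: set LOGSECOND to -1, which requires archive logging
db2 update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archive
db2 update db cfg for MYDB using LOGSECOND -1
```

With infinite logging the limit effectively becomes the disk holding the active and archived logs, which is exactly why the runaway application eventually hit the disk-full error instead.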

Anyway, reinstalling the application (which creates a new database too) solved her problem, but this is not always possible, especially in production. Do we have any tool which will recognize a recursive pattern in the log file (it might be caused by a recursive pattern in the application code) and take some corrective action, instead of just filling the log until the disk is full and giving the impression that the system is hung? Your suggestions are welcome.

Today I saw a very cool widget from Meebo. I saw it on Leon Katsnelson's blog on freeDB2.com. Meebo is a site which gives you a consolidated view of all your messengers, like Gtalk, MSN, Yahoo, etc. It also gives you this widget, which lets the people visiting your blog or site chat with you live. You just need to create a Meebo id and copy-paste the HTML code into some space on your site. I just did that on my ITtoolbox blog. Have a look at how it works, and happy chatting with the people who visit your blog and like to chat with you live.

Today I started reading the book "Database Administration: The Complete Guide to Practices and Procedures". I started with the 3rd chapter, which talks about data modeling. I am not sure if data modeling falls into the tasks a DBA should perform. At the same time, as it is a one-time activity, having an independent team to do the data modeling may result in no work for that team once the database is implemented. And with so many tools available to model data, is data modeling really that difficult? The book says an efficient data model allows you to minimize data redundancy, maintain integrity and consistency, and achieve better data sharing and access, and that while modeling you should consider how the data may be used in the future instead of only its current usage. But predicting how data will be used in the future is a really difficult task. With new technologies coming every day, the usage of the data can go any way. Predicting the future use correctly is what really makes a data model a good one (of course, the prediction has to be right for that). I think I now need to start looking for some article which will give me insight into how to predict the different ways data can be accessed in the future. Do let me know if you know anything related.

I got a chance to present at the upcoming IDUG event this year again. Last year I was totally into pureXML support for DB2; this year I am more into Java programming and performance. My topic for this year's session is "Approaches to Java application development: Static Vs Dynamic", and I will be covering the different options a database programmer has while developing Java applications. There are a lot of technologies available to interact with a database from your Java web application; two of them are EJB and JPA. Taking the example of EJB, most of the time EJBs are generated by tools and frameworks and don't give you control over the SQL statements. IBM has come up with a new technology, pureQuery, which allows developers complete control over the SQL and at the same time provides high-performance access to the database. To know more about it, have a look at this article.

In my session, I will be covering the different options available to database developers, specifically DB2 database programmers. It includes the traditional ways like JDBC along with newly available ones like pureQuery. I will also be differentiating between the two ways of executing an SQL statement, i.e., static and dynamic.

You can see the complete brochure of the event at the IDUG site here.

http://idug.org/wps/portal/idug

Registration process is also available here. So hope to see you there.

After almost 3 months of hard work in the office, I thought of taking a break and planned a trip to Goa. Despite being very near Goa in my college days, I never got a chance to see this most preferred tourist place in India. It was the off season there, but I planned this trip with some of my friends (or I should say they planned and I just joined). We spent 2 exciting and happening days there. From the first day's Vagator beach, where we were almost caught between the high-tide waves and still survived, to the last day's forts and temples, we enjoyed ourselves to the fullest. Some of the things to remember apart from the tourist places are the card games on the train, Tamil's singing, the view of Dudhsagar falls from the train, and not to forget the video shoots from none other than me. It was a pleasant break to refresh all of us to start working again.

Yesterday, I read a small article on DBA traits on Craig Mullins' blog. I agreed with what he mentioned in the article. Some traits he mentioned: organized and capable of succinct planning, adaptable, insatiably curious, and having some people skills too. He says, "DBAs are expected to know everything about everything -- at least in terms of how it works with databases". When I shared this article with some of my colleagues, I was a little surprised by the responses. One of the questions that came out of the discussion was, "Does a DBA certification help in becoming a DBA?". Some say no, as it will never give you a feel for real-world problems. According to me, it depends on who is doing the certification. A person who has real work experience may see no value in doing the certification; however, for a person who never got a chance to see the real world, a DBA certification may help by at least giving a little insight into the kind of problems DBAs face. It will at least enable them to see the problem in the right manner and approach it in the right direction. I will say certification is the first step to becoming a DBA, and a DBA will never want to go back to the first step. So the value of certification depends on the step of the DBA ladder you are on.

While I was in discussion with my colleagues, I was looking for some book which would give me a complete list of the tasks a DBA needs to perform. I found the book "Database Administration: The Complete Guide to Practices and Procedures", again written by Craig Mullins. Hoping that I will find the stuff I am looking for in this book.

Finally, after 3 months of preparation, I have completed the DB2 9 Advanced DBA certification. It was my 2nd attempt. The first time I took this exam, I was under the impression that it would be similar to the exams I took earlier (DBA and Application Development), where I could just read the certification guide and be able to clear it, but that was not the case here. When I attempted it the first time and failed by 4 marks, I realized I needed to read some concepts in much more detail, specifically HA and performance. HA and performance make up 52% of the exam, and you need a good hold on these topics to get a good score. So I started reading the complete performance and HA guides. While reading, I really felt that DB2 is not just SQL and some monitoring stuff; there is a lot in it. It also gave me a feel for how hard it can be to tune a database, and how difficult it would be for a DBA to tune a database which is huge in size. This encouraged me to read some more of the administration guide and go for the DB2 problem determination certification. And yes, that is my next goal for this year: another certification and a new way of looking at database tuning and problem determination.

In case you need some tips and questions, let me know. I will be happy to help anyone who likes DB2 administration.

I was a little busy for the past some time, so I didn't get a chance to post anything. There are a lot of new things I read in the last month. There are some interesting posts from Susan Visser about the availability of the books in India, about the salary survey, about IDUG, and some polls. I also listened to the podcast from John Boyer about XForms 1.1. But one interesting thing I tried during this time is the use of the package cache in DB2 LUW.

In my last post I talked about the advantages of static execution over dynamic. A dynamic query goes through the same phases as a static one; the only difference is that in the static case DB2 saves the compiled SQL statements in the catalogs, while in the dynamic case compilation occurs every time. So if DB2 provides some mechanism to save the compiled SQL in memory and use it in the future when the same statement is encountered again, a dynamic statement can give a performance benefit even better than static in some cases. The package cache serves this purpose. If, in your application, most of the transactions are repeating, increasing the size of the package cache (the PCKCACHESZ database configuration parameter) will allow DB2 to save the compiled dynamic statements in memory and reuse them. This may not give you an advantage if your statements are not repeating. The first time, a dynamic statement will take its own time, as it needs to be compiled, but from the 2nd time onwards you can see the performance benefit. The real question here is: will it be an alternative to static statements? I think not. I am not sure how many statements we can cache. Apart from that, this activity totally depends on the database manager, which decides when to cache and when not to. If there are a lot of statements compared to the size of the cache, there is the possibility that the compiled statements overwrite each other and hence provide no benefit. Also, this cache is allocated whenever the database is initialized and freed when the database shuts down, so the statements need to be cached again every time the database is initialized.
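A sketch of how you might size and watch the package cache from the CLP (the database name and size are placeholder values; the counter names are the ones that appear in the snapshot monitor output):

```shell
# PCKCACHESZ is in 4 KB pages; 10240 here is just an illustrative value
db2 update db cfg for MYDB using PCKCACHESZ 10240

# Watch cache effectiveness: hit ratio = (1 - inserts / lookups) * 100
db2 get snapshot for database on MYDB | grep -i "package cache"
# Package cache lookups   = how often a statement was searched for in the cache
# Package cache inserts   = how often it had to be compiled and inserted
# Package cache overflows = a sign the cache is too small for the workload
```

A hit ratio close to 100% means most dynamic statements are being reused from memory; a steady stream of inserts or overflows suggests the statements are not repeating, or the cache is being churned.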

Any SQL statement within DB2 can be executed either in a static way or a dynamic way. While static gives a performance bonus at run time, dynamic gives the flexibility to decide on the query at run time itself. Any SQL statement's execution goes through various phases like compilation, semantic analysis, query rewrite, access plan generation, and execution. The basic difference between static and dynamic execution is the time at which an SQL statement goes through these phases. Static execution takes advantage of the SQL being known at compile time, and hence the opportunity to create the access plan at compile time itself; at run time, DB2 only executes the access plan. In dynamic execution, all these phases happen at run time. So one can say that any SQL can be run dynamically, but not all SQL can be run statically. For static execution to occur, the SQL must be known at compile time, and the objects referenced by the SQL must exist in the database, as they are required to complete the access plan generation phase at compile time.

DB2 provides different ways of running an SQL statement statically. While the C language provides embedded SQL in C for static execution, Java provides the SQLj language (embedded SQL in Java). For any statement to run statically, DB2 needs to store the access plan in the database so that it can be used at run time. The object used to store this information is called a package. For each static application, DB2 creates a package which contains the details of each access plan and the corresponding SQL statement. For example code snippets for static and dynamic execution, have a look at the sqllib/samples directory. It contains samples for CLI (under the cli directory), embedded C (under the c directory), JDBC (under the java/jdbc directory), and SQLj (under the java/sqlj directory).
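To make the contrast concrete, here is a rough side-by-side sketch (the table, column, and variable names are made up; the SQLj half needs the sqlj translator and a bind step before it can run):

```
// Dynamic SQL via JDBC: the statement text is compiled by DB2 at run time.
PreparedStatement ps = con.prepareStatement(
    "SELECT lastname FROM employee WHERE workdept = ?");
ps.setString(1, dept);
ResultSet rs = ps.executeQuery();

// Static SQL via SQLj: the statement is known at precompile time, so the
// translator + binder store its access plan in a package on the server.
#sql iterator NameIter (String lastname);
NameIter names;
#sql [ctx] names = { SELECT lastname FROM employee WHERE workdept = :dept };
```

Both fetch the same rows; the difference is purely when the access plan is built, which is exactly the static-versus-dynamic distinction described above.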

Last week, I got a question regarding a connection failure. This problem is very common in DB2 and mostly occurs because TCP/IP communication is not enabled properly. I suggested the following steps to test whether TCP/IP is set up properly.

1. Check that the DB2COMM registry variable is set to TCPIP.
2. The port number is defined properly in the services file. Better to use the port number directly instead of the name of the port.
3. The dbm cfg SVCENAME parameter is assigned the correct value.
4. If the database is remote, it is cataloged properly.
5. Try pinging the server machine. Check if the IP for the server is dynamic, in which case the server IP can change, resulting in lost communication.
6. Try running the LIST DATABASE DIRECTORY command and make sure that the database appears in the list.
7. Try running LIST NODE DIRECTORY and make sure the server node is cataloged properly.
8. Check if there is any firewall which is preventing access to the server.
9. Try connecting to the server using TELNET and the DB2 port.
10. Try connecting to the database from the CLP.
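The checklist above maps to CLP commands roughly like these (the host name, node name, database name, user, and port 50000 are placeholders for your environment):

```shell
db2set DB2COMM=TCPIP                              # step 1
db2 update dbm cfg using SVCENAME 50000           # steps 2-3: numeric port
db2stop force && db2start                         # restart so changes apply

# Step 4: catalog the remote node and database from the client
db2 catalog tcpip node mynode remote dbserver.example.com server 50000
db2 catalog database mydb at node mynode
db2 terminate

db2 list node directory                           # step 7
db2 list database directory                       # step 6
db2 connect to mydb user db2user                  # step 10
```

If the connect still fails after these succeed, the remaining suspects are usually the firewall (step 8) or basic network reachability (steps 5 and 9).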

These are very simple tests to make sure network communication is fine. So next time you see any connection failure, try these tests to check the communication.

Today while browsing developerWorks, I came across this tutorial, which explains how to set up your system to create a web application using the free software suite from IBM, i.e., DB2 Express-C, Eclipse, and WAS Community Edition. The tutorial explains how to install, configure, and integrate these components and start creating your application. I think this will be beneficial for students for their projects, and at the same time for people who like to learn, create their own sample web application, and see the power of this suite. Here is what this tutorial covers in its two parts.

As I promised that I would put up the questions asked by the customer on DB2, here are some of them.

1. One of the questions was on GTTs (Global Temporary Tables). As documented, GTTs are at the session level and are flushed out once the session is closed. The question was: can we have 2 GTTs with the same name in 2 different stored procedures? I think it's possible, but it seems it may create a conflict when we try to call both stored procedures using the same connection. As GTTs are at the session/connection level, the GTT created in the second stored procedure may conflict with the existing GTT created in the first. I wonder if GTTs are flushed out as soon as we come out of the stored procedure execution. I still need to play with this and find out the correct answer. Your feedback is welcome.
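For anyone who wants to reproduce the scenario, a declared temporary table in DB2 looks like this (the table and column names are made up). It always lives in the SESSION schema of the current connection, which is exactly why two procedures declaring the same name on one connection can clash; the WITH REPLACE clause is one way around it:

```sql
DECLARE GLOBAL TEMPORARY TABLE temp_orders
    (order_id INT, amount DECIMAL(10,2))
    ON COMMIT PRESERVE ROWS
    NOT LOGGED
    WITH REPLACE;        -- silently replaces an existing SESSION.temp_orders

SELECT COUNT(*) FROM SESSION.temp_orders;
```

Without WITH REPLACE, the second DECLARE of the same name on the same connection fails because SESSION.temp_orders already exists.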

2. The second question was on stored procedures. As we know, execute permission on the package is enough to call a stored procedure. Now, if one stored procedure calls another inside it, do we need to grant explicit execute permission on the inner stored procedure, or will granting permission on the outer stored procedure implicitly grant permission on the inner one too? The concern here was that in their scenario they have a lot of nesting of stored procedures, and granting explicit permission on each of the nested procedures is really cumbersome.

3. Oracle gives the flexibility to provide external hints to an SQL statement for optimization. According to them, these hints are really useful, as they can force a query to use certain indexes. Their question was whether we have something similar. Yes, we do, but we never encourage its use, as the DB2 optimizer is intelligent enough to decide on the indexes to be used, and providing these hints may force the optimizer to follow the user-provided hints and may degrade performance.

4. Do we have compiler directives? I am not sure what they meant by this. There might be something similar in Oracle.

The last 4 days were very happening days. Last Friday, I started from Bangalore to Delhi to attend the marriage of my friend in Chandigarh, which is 250 KM from Delhi. My flight got delayed by 2 hours, followed by 2 and a half hours in the queue for the security check at India's Silicon Valley's airport, which delayed the flight by another hour. The airport is still the same as it was 5 years back, but the city has changed a lot. The speedy growth of the city has exponentially increased the population of air travelers, and hence a lot more flights are coming to Bangalore, but the small size of the airport is not able to take the load now. The queue was so long that the hall was full and the queue started spilling out of the airport. Hopefully things will improve when the new airport opens in March next year. Later, on Saturday, we met with an accident while going to Chandigarh by road. After seeing the car's condition, anybody would have said we were really lucky. Then 2 days of enjoyment, followed by work.

After these eventful days, I participated in DB2 training for a customer, which was again eventful in its own way: a lot of questions and a lot of discussions. I will put these questions in my next blog entry.

Finally, I was able to successfully install the IBM Mashup Starter Kit with the help of Lauren. I came to know about this kit from Sreekanth's blog entry "Get it done quickly - Mashup". In that entry he mentioned that this kit helps you create a dashboard with data from various sources with zero coding, so it looked interesting and I thought of giving it a try. After 2-3 days of struggle, and with the help of Stephen and Lauren, I was able to successfully install both QEDWiki and Mashup Hub, which are part of this kit. I still need to spend some time playing and experimenting with it. The README provided with the kit says that you need to use the Express-C version of DB2 and Zend Core version 2.0.4 or later, but it works fine even with DB2 V9 ESE, though not with previous versions of Zend Core.

I will write about my experience using this kit soon in another blog entry. Till then, enjoy reading.

As not all the certification guides are available for DB2 Version 9, it looks a little difficult to go for the V9 certifications. Susan Visser mentioned in one of her blog entries to read the book titled "IBM DB2 9 New Features" along with the corresponding V8 certification guide to prepare for and pass the V9 certification. I wondered if this book is available in India, asked Susan that very question, and got a pleasing answer: the book is available through Tata McGraw-Hill publications. When I visited their site, I found they need a manual order, and I was not sure how many days it would take to get it delivered to me. Then, after a Google search, I found that you can order the book online through the cb-india portal, and they deliver the book within 3 days of the order. They are giving a 20% discount too. I just ordered the book, as I need it to prepare for the DBA certification. Here is the portal link:

www.cb-india.com

Here are the contents of this book; they look really interesting. It seems the V9 features are covered thoroughly. Susan mentioned in her blog: "One of the major selling points of this book, besides the fact that it lists all the newest features is it's coverage of XML. There is no other book on the market that has such thorough coverage of XML on DB2 9."

So it will be a nice read. I am waiting for the book to reach me soon.

While working over the last 3.5 years under different managers, I always used to wonder on what basis I could compare my managers. How can I say which one is the best and which is not so good, comparatively? (It might be an individual's personal choice, but still it would be good if there were some criteria.) I always wonder what to say when a manager asks me for feedback. Today I read an article which gave me some points to consider while preparing for such a feedback session. Thanks to my teammate Jayasimha, who sent the link to this article. Have a look.

Isolation levels are generally associated with DBA activity, but sometimes an application developer wants to change them too, maybe for a particular type of query. Ever wondered how you can update the isolation level within your Java application? Do it with the setTransactionIsolation method of the Connection object, for example:

con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

where Connection.TRANSACTION_SERIALIZABLE is a JDBC constant; there is a corresponding JDBC constant for each isolation level.
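For quick reference, the standard JDBC constants and the DB2 isolation levels they correspond to can be captured in a small runnable sketch (the class and method names are mine, made up for illustration):

```java
import java.sql.Connection;

public class IsolationLevels {
    // JDBC constant                    -> DB2 isolation level
    // TRANSACTION_READ_UNCOMMITTED (1) -> UR (Uncommitted Read)
    // TRANSACTION_READ_COMMITTED   (2) -> CS (Cursor Stability)
    // TRANSACTION_REPEATABLE_READ  (4) -> RS (Read Stability)
    // TRANSACTION_SERIALIZABLE     (8) -> RR (Repeatable Read)
    public static String db2Level(int jdbcLevel) {
        switch (jdbcLevel) {
            case Connection.TRANSACTION_READ_UNCOMMITTED: return "UR";
            case Connection.TRANSACTION_READ_COMMITTED:   return "CS";
            case Connection.TRANSACTION_REPEATABLE_READ:  return "RS";
            case Connection.TRANSACTION_SERIALIZABLE:     return "RR";
            default:
                throw new IllegalArgumentException("unknown level " + jdbcLevel);
        }
    }

    public static void main(String[] args) {
        // setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE)
        // asks the DB2 driver for RR behaviour:
        System.out.println(db2Level(Connection.TRANSACTION_SERIALIZABLE));
    }
}
```

Note the naming mismatch worth remembering: JDBC's TRANSACTION_REPEATABLE_READ maps to DB2's RS, while JDBC's TRANSACTION_SERIALIZABLE maps to DB2's level that is itself named Repeatable Read (RR).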

Earlier today, I was preparing a presentation on DB2 pureXML. At the end of the presentation, I wanted to put some references and came across these DB2 games. A nice idea to teach DB2 to people who are new to it. I found the detective game more interesting than the business one; it is easier but interesting too. I think these will be very useful to teach people DB2 and SQL/XQuery. Here is the link to download the games.

XML, XML, and XML... most of the data in this world can be represented in XML format. But sometimes it is not beneficial to do that, especially when there is tight coupling in the data and the data obeys strict rules. At the same time, when the data needs to be very flexible and you would like to store only those things which have some logical value, XML is the way. It saves memory if there are a lot of properties/columns which have null values. In an RDBMS, NULL is a logical entity, while in the XML world there is nothing like NULL; a value either exists or it doesn't. While in an RDBMS metadata is stored only once (one column name), in XML the metadata is repeated for every record. Then what is the benefit? There is no need to store metadata in a different location, which means no catalog tables; but yes, there are schemas which do that job. While an RDBMS can't work without catalog tables, XML can work without schemas.

Now, coming to how XML fits within an RDBMS. An RDBMS stores data in a table format, and each column has strict data typing. So to store XML, either we need to split it into parts which have strong data types, or treat the full XML as a binary or character stream. In the first case, we need to create the metadata first and then split the data; in the second, we completely ignore the metadata associated with the XML document. We need a way to save the metadata without splitting it from the XML, so the XML retains its flexibility. This means storing the XML in such a way that it preserves its structure and at the same time fits into the RDBMS model. And here comes the innovative solution of pureXML.

DB2 pureXML is a technology which works with both relational and XML data under one umbrella. It can query both XML and relational data at the same time using the SQL/XML language. The benefit is that applications can now concentrate on business logic rather than worrying about how to handle the XML data coming their way. DB2 takes care of your XML data, from storing it to querying, transforming, and presenting it. DB2 treats XML as just another data type in its repository and provides you with functions to work with this data type.

XQuery is the language to query XML data, and for this new XML data type, DB2 provides you the flexibility to query it using XQuery alone. At the same time, it integrates XQuery and SQL together using SQL/XML, so you can query a table to select relational columns where some condition on the XML is met. So far so good.
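A small taste of that integration (the table, column, and element names are made up; the syntax is DB2 9 SQL/XML and XQuery):

```sql
-- Relational select filtered by a condition inside the XML column
SELECT id
FROM customers
WHERE XMLEXISTS('$d/customer[city = "Bangalore"]' PASSING info AS "d");

-- The same data reached through pure XQuery
XQUERY
  for $c in db2-fn:xmlcolumn('CUSTOMERS.INFO')/customer
  where $c/city = "Bangalore"
  return $c/name;
```

The first statement shows SQL staying in charge while XMLEXISTS dips into the XML; the second shows XQuery as a first-class query language over the same column.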

Today I got my developerWorks blog working, and I am quite excited to talk about the topics I have been working on for so long and would like to share with others. To start with, I would like to share my first Redbook and my first article. The Redbook, titled "DB2 Express-C: The Developer Handbook for XML, PHP, C/C++, Java, and .NET", talks about the various aspects of application development with DB2 Express-C version 9. It has a dedicated chapter for each major language supported by DB2 and a chapter on XML. You can download the book from http://www.redbooks.ibm.com/redpieces/abstracts/sg247301.html?Open.

My first article just got published last week. It talks about the new and enhanced XML features of DB2 V9.5. You can have a look at the article here: http://www.ibm.com/developerworks/db2/library/techarticle/dm-0711sardana/index.html