Tuesday, October 11, 2016

English version
A few weeks ago I announced the availability of a new script that allows a DBA to see which sessions are consuming temporary space. After its release I received some feedback about its use, which triggered some changes and/or thoughts. Specifically, users faced these two issues:

1) The script didn't work in version 9.40
After some investigation I noticed I was using a column in one of the sysmaster views that doesn't exist in version 9.40. I managed to work around that, but then I found another issue when I tried to retrieve the page size of a dbspace. I was getting it from a column of the sysdbstab view, but because the ability to have chunks with different page sizes was only introduced in V10, that column doesn't exist in previous versions.

2) The script didn't run when it was most needed: when the temporary dbspaces are full
At first glance this looked like a serious oversight in the way I created the script. In fact I need to create a temporary table, and those are created in the temporary dbspaces; if they're full I may have a problem. But further investigation showed the engine moves the creation of those tables to other dbspaces. The real issue is running the script on secondary servers when the temporary dbspaces are full.

To solve issue 1) above I made some changes to the script: it now tries to adapt to the version currently being used (a minimal sketch of this approach is shown below).

The solution for problem 2) is more complex. It's not feasible to write the script without the temporary tables (it might be possible, but it would be very difficult). As already explained, the script should run fine on primary/standard servers, because the engine will shift the location of the temporary tables to the dbspace holding the database, or to rootdbs (in the case of this script it will be rootdbs, as it connects to sysmaster). On secondary servers this shift is not possible. The only workaround I can think of requires a "trick" and some planning in advance: a DBA can create a very small temporary dbspace (10-50MB is more than enough) without adding it to the list of dbspaces in DBSPACETEMP, and then export DBSPACETEMP=newtinydbspace to run the script, as shown in the second example below. Because it's not in the DBSPACETEMP $ONCONFIG parameter, it will not be used by the sessions' regular work and will be reserved for the script when needed. It's not a pretty solution, but it's the only one I can think of at the moment.
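First, a minimal sketch (not the actual script) of the kind of version adaptation used for issue 1). It assumes DBINFO('version','major') is available on the versions involved; the sysdbstab.pagesize column (V10+) is the one mentioned above, while the fallback through sysshmvals.sh_pagesize is an assumption on my part:

#!/bin/sh
# Minimal sketch, NOT the actual script: choose the page size source
# according to the engine version.

# Ask the engine for its major version
MAJOR=$(echo "SELECT FIRST 1 DBINFO('version','major') FROM systables;" | \
        dbaccess sysmaster 2>/dev/null | \
        awk 'NF==1 && $1 ~ /^[0-9]+$/ {print $1; exit}')

if [ "${MAJOR:-0}" -ge 10 ]
then
        # V10 and later: each dbspace can have its own page size
        PG_SQL="SELECT name, pagesize FROM sysdbstab"
else
        # 9.40: no pagesize column, so use the instance base page size
        # (sysshmvals.sh_pagesize is an assumption on my part)
        PG_SQL="SELECT d.name, s.sh_pagesize pagesize FROM sysdbstab d, sysshmvals s"
fi

echo "$PG_SQL;" | dbaccess sysmaster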
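And an example of the "trick" described for issue 2); the chunk path, dbspace name, size and script name are just placeholders:

# Prepare a chunk file and create a tiny temporary dbspace (size in KB)
touch /ifxchunks/tmp4dba_p0
chown informix:informix /ifxchunks/tmp4dba_p0
chmod 660 /ifxchunks/tmp4dba_p0
onspaces -c -d tmp4dba -t -p /ifxchunks/tmp4dba_p0 -o 0 -s 20000

# Do NOT add tmp4dba to the DBSPACETEMP parameter in $ONCONFIG, so that
# normal sessions never use it. When the regular temporary dbspaces are
# full, point only your own session to it and run the script:
export DBSPACETEMP=tmp4dba
./check_temp_usage.sh    # placeholder name for the monitoring script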

Meanwhile I received some interesting feedback, and I think we may see a solution to the underlying problem in future versions. We'll have to wait and see, but I really hope this script becomes nearly useless in the future, which, as weird as it sounds, is actually a very good thing!

Users will have full (root) access and will be responsible for managing a Linux server (CentOS)

Informix Advanced Enterprise Edition will be installed, and Informix Warehouse Accelerator will be included (but not configured)

There will be four server sizes with different resources (CPU, memory, disk, network)

The server will be available through public Internet access. Users can configure it to participate in a VPN, for example

No management, monitoring or backup services are currently offered. This may change in the future, but for now the customer has total freedom to implement whatever best fits their requirements

It works as if a server was rented in the cloud and the customer used their own Informix licenses, but of course that's not necessary: customers are charged on a monthly basis that covers everything, including product support

The lack of backup services is a relative issue. IBM and other vendors offer several methods for using storage in the cloud, and some of those solutions could be considered.
Actually, something I personally find interesting is the possibility of running backups to the cloud. Informix already supports this for Amazon S3, but the implementation seems a bit too simplistic. Something I've been trying as an exercise seems more interesting and has a bit more potential: our backup tool, onbar, interacts with storage managers using a standard and open protocol called XBSA. It's possible to create an XBSA library that sends the objects to the cloud. Just a few hours ago I managed to do my first restore from a backup previously sent to the cloud. It took me less than a week of free time to create this, and it currently has less than 1000 lines of code written by me. And yes, it's incomplete: it doesn't have proper error handling or debugging, doesn't manage metadata or an object catalog, etc. But it clearly shows this could be a path to using cloud storage for database backups. Additionally, some cloud services (like IBM's Bluemix Object Storage) use a standard API called Swift, which makes it relatively easy to support several cloud providers if the library can be configured externally (a rough sketch of that interaction is shown below). Hopefully in the future I'll be able to write an article dedicated to this proof of concept.
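The XBSA library itself is C code and too long to show here, but just to give an idea of the cloud side of it, this is roughly what storing and retrieving a backup object through a Swift-style REST API looks like. The endpoint, credentials, container and object names are made up, and the authentication step varies between providers:

# 1) Authenticate (TempAuth-style example); the reply carries an
#    X-Auth-Token and an X-Storage-Url header
curl -s -i -H "X-Storage-User: myaccount:backupuser" \
           -H "X-Storage-Pass: secret" \
           https://swift.example.com/auth/v1.0

# Values extracted from the reply headers above (examples):
TOKEN="AUTH_tk0123456789abcdef"
STORAGE_URL="https://swift.example.com/v1/AUTH_myaccount"

# 2) Create a container for the backup objects (idempotent)
curl -s -X PUT -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/ifx_backups"

# 3) Store a backup object (roughly what the library does when onbar
#    hands it the data through the XBSA calls)
curl -s -X PUT -H "X-Auth-Token: $TOKEN" \
     --data-binary @/tmp/rootdbs.L0 "$STORAGE_URL/ifx_backups/rootdbs.L0"

# 4) Fetch it back during a restore
curl -s -H "X-Auth-Token: $TOKEN" \
     -o /tmp/rootdbs.L0 "$STORAGE_URL/ifx_backups/rootdbs.L0"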

About Me

I'm an IBMer and I've been working with IDS since I joined Informix in 1998.
The ideas and opinions expressed in this blog are personal and in no way represent IBM positions, strategy or opinions.
I chose to write this blog in English so that I could reach the maximum number of Informix users. Take notice that English is not my native language, so there are probably many mistakes.
I appreciate any comments, corrections and topic suggestions.
I can be reached at domusonline at gmail dot com.