I'm happy to announce that Informix JDBC driver 4.10.8.1 is out on Maven Central!

Using a distribution platform such as Maven, it's now easier than ever to download, upgrade, and utilize the Informix JDBC driver in your applications. You can bypass the traditional installer and download site and get the driver you need quickly and efficiently.

Here is a link to the Maven Central page for the new driver. The page includes examples for many build systems (Gradle, Maven, SBT, and more) showing how to include our driver. You can also download the jar file directly from the site.

The group we use for Maven is 'com.ibm.informix'. This will be where we push out relevant technologies (like the JDBC driver) that make sense to have a home on Maven. Wondering where the ifxjdbc.jar and ifxjdbcx.jar files went? We combined their functionality into a single jar file, so now you can download and manage just one file.

Have you heard? IBM Informix 12.10.xC8 is publicly available for download! The most significant update in this release is on-disk encryption, also known as Encryption At Rest (EAR). The EAR feature allows you to encrypt the data at the lowest level, on the disk. Now you can rest easy, knowing that your data is encrypted and secure in the event of a physical disk loss or theft. In a world where low-cost, physically accessible computers are increasingly used for temporary storage, aggregation, and analytics of data, keeping that data secure is vital. You can start encrypting your storage spaces by upgrading to 12.10.xC8.

Also check out the newly redesigned landing page for IBM Informix! All the IBM Informix resources you need are in one place - product offerings, select customer references, videos, demos, white papers, code samples, and downloads.

I wanted to highlight some exciting news that was announced this past October: IBM Informix on Cloud.

IBM Informix on Cloud is a hosted cloud offering where you get a pre-configured Advanced Enterprise Edition Informix server on a SoftLayer machine. After the instance is deployed and configured, you get complete control over the instance and the system. This hosted offering provides you with the full features of an on-premises Informix instance without the cost, complexity, and risk of managing your own infrastructure.

Today I'd like to highlight one of the lesser known features, a hidden gem if you will, in the Informix wire listener: support for transactions.

Transactions in Mongo and REST?

If you are coming from a traditional relational database background, transactions are a very familiar and truly essential feature. But take a moment to consider two of the application paradigms that the listener enables: MongoDB and REST.

Let's start with the Mongo listener type, which enables applications written using any of the vast array of MongoDB drivers to run against Informix. MongoDB, and many other NoSQL databases, do not have any notion of transactions. They do not have the ACID (Atomicity, Consistency, Isolation, Durability) properties of relational databases; instead they use an "eventually consistent" model. But there are many application scenarios where transactionality is critical. In the Mongo world, if you need transactions, you as the application developer have to implement them yourself, typically as a two-phase commit inside your application. By contrast, in the Informix wire listener, transaction support is built in and is readily available to all Mongo clients.

Second, let's consider the REST listener type. REST by definition is stateless. Therefore the notion of bundling up multiple requests into a single transaction does not traditionally fit into this model either. But the Informix wire listener does enable ways to make this happen even for REST clients should your application require it.

Ok, now that we've covered why this transaction support is a unique value add, let's dive into the semantics.

Listener support for transactions

The listener offers two different ways to execute a transaction: transaction mode and batched transactions. Both of these methods of running transactions are available to both Mongo and REST clients. We'll explore each of these in turn.

Option 1: Transaction mode

In this method, you first issue a command to put the listener in transaction mode. Then execute as many inserts, updates, or deletes as you would like using the normal syntax. None of these statements will be committed until you issue a commit command. There is also a rollback command should you need to roll back the transaction. You remain in transaction mode until you explicitly exit it. Therefore any inserts, updates, or deletes run after the first commit command will be part of the next transaction and will remain uncommitted until you issue another commit. When you are done using transactions and want the listener to go back to the default behavior of auto-committing each statement, you issue a final command to exit transaction mode.

Transaction mode for Mongo clients

Here's an example using the Mongo shell syntax. We'll take the classic transaction example of transferring funds from one account to another. Here we'll transfer $1000 from a certain customer's checking account to their savings account.

The first command, db.runCommand({transaction:"enable"}), puts your session into transaction mode. The following two statements update the accounts table. Keep in mind that accounts could be a regular relational table or a JSON collection; it doesn't really matter, because the syntax from the application's perspective is the same for both. In the Mongo update statement, we pass two documents: the first is the condition that determines which rows or documents get updated; the second contains the update operation itself. So, for example, db.accounts.update({customer_num:191187, type:"checking"}, {$inc:{balance:-1000}}) means update the account of type "checking" for customer number 191187 by incrementing the "balance" field by -1000 (so really a decrement operation). After the updates, we commit the transaction with db.runCommand({transaction:"commit"}). Finally, we disable (exit) transaction mode with db.runCommand({transaction:"disable"}).
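Since the listener speaks the MongoDB wire protocol, the same sequence can be driven from any MongoDB driver as well as the shell. Here is a minimal sketch using the MongoDB Java driver; the host, port, and database name are placeholders, and error handling is omitted:

import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class TransferFunds {
    public static void main(String[] args) {
        // Point this at the wire listener, not at a mongod (host and port are placeholders).
        MongoClient client = new MongoClient("localhost", 27017);
        MongoDatabase db = client.getDatabase("mydb");

        // Enter transaction mode for this session.
        // (Transaction mode is per session, so this assumes the driver keeps reusing the same connection.)
        db.runCommand(new Document("transaction", "enable"));

        // Move $1000 from checking to savings for customer 191187.
        db.getCollection("accounts").updateOne(
                new Document("customer_num", 191187).append("type", "checking"),
                new Document("$inc", new Document("balance", -1000)));
        db.getCollection("accounts").updateOne(
                new Document("customer_num", 191187).append("type", "saving"),
                new Document("$inc", new Document("balance", 1000)));

        // Commit the transaction, then leave transaction mode.
        db.runCommand(new Document("transaction", "commit"));
        db.runCommand(new Document("transaction", "disable"));

        client.close();
    }
}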

It is not required to disable transaction mode right away. If you have other transactions to run, you can continue right along after the commit, and the next statements will be part of a second transaction. The db.runCommand({transaction:"disable"}) is critical, though, because until you run it, all statements remain uncommitted until an explicit db.runCommand({transaction:"commit"}). So don't forget the last step if you want to go back to the default auto-commit behavior!

There are two more options in the transaction command that are worth noting.

To roll back a transaction, run:

> db.runCommand({transaction:"rollback"})
{ "ok" : 1 }

To check whether you are currently in transaction mode, you can use the "status" option. The "enabled" field in the response shows whether transactions have been enabled on the session (i.e. whether you are in transaction mode) and the "supported" field indicates whether the current database supports transactions.
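For example, a quick check from the MongoDB Java driver might look like this (assuming the db handle from the sketch above):

// "enabled" reflects the session's transaction mode; "supported" reflects the current database.
Document status = db.runCommand(new Document("transaction", "status"));
System.out.println("enabled: " + status.getBoolean("enabled")
        + ", supported: " + status.getBoolean("supported"));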

Transaction mode for REST clients

All of the commands just described in the Mongo section can also be run through REST. Below I will present the exact same transfer-of-funds example, but a couple of things first:

For transactions to work with REST, you have to be using cookies! When you authenticate to the listener for the first time, the listener includes a Set-Cookie directive in the response that contains an identifier for your session. This cookie must be sent with every subsequent REST request that is part of the transaction. Cookies are always a good practice when using the REST listener to avoid getting a new session each time, but they are absolutely required if you want to use transaction mode in REST.

A few reminders on REST syntax.

A GET request runs a query. A GET request on the $cmd pseudo collection runs a command, which is what you use to run the transaction commands.

PUT on a table or collection does an update. The update condition is passed in the query parameter in the URL; the update operation is passed in the data portion of your request.

If you need more of a refresher on REST syntax, check out this page of the IBM Informix Knowledge Center.

Here's the same example as above, just using REST syntax:

GET /mydb/$cmd?query={transaction:"enable"}

PUT /mydb/accounts?query={customer_num:191187, type:"checking"}

Data: {$inc:{balance:-1000}}

PUT /mydb/accounts?query={customer_num:191187, type:"saving"}

Data: {$inc:{balance:1000}}

GET /mydb/$cmd?query={transaction:"commit"}

GET /mydb/$cmd?query={transaction:"disable"}
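Putting this together with the cookie requirement described above, a bare-bones Java sketch of the flow using HttpURLConnection might look like the following; the listener host and port are placeholders, authentication is assumed to happen on the first request, and error handling is omitted:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RestTransfer {
    static final String BASE = "http://localhost:8080";   // placeholder listener host:port
    static String cookie;                                  // session cookie from the first response

    static void call(String method, String pathAndQuery, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(BASE + pathAndQuery).openConnection();
        conn.setRequestMethod(method);
        if (cookie != null) {
            conn.setRequestProperty("Cookie", cookie);     // stay in the same session
        }
        if (body != null) {
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
        }
        conn.getResponseCode();
        if (cookie == null) {
            // Capture the session identifier; a real client should keep only the name=value part.
            cookie = conn.getHeaderField("Set-Cookie");
        }
        conn.disconnect();
    }

    public static void main(String[] args) throws Exception {
        String enable  = URLEncoder.encode("{transaction:\"enable\"}", "UTF-8");
        String commit  = URLEncoder.encode("{transaction:\"commit\"}", "UTF-8");
        String disable = URLEncoder.encode("{transaction:\"disable\"}", "UTF-8");
        String fromChk = URLEncoder.encode("{customer_num:191187, type:\"checking\"}", "UTF-8");
        String toSav   = URLEncoder.encode("{customer_num:191187, type:\"saving\"}", "UTF-8");

        call("GET", "/mydb/$cmd?query=" + enable, null);
        call("PUT", "/mydb/accounts?query=" + fromChk, "{$inc:{balance:-1000}}");
        call("PUT", "/mydb/accounts?query=" + toSav, "{$inc:{balance:1000}}");
        call("GET", "/mydb/$cmd?query=" + commit, null);
        call("GET", "/mydb/$cmd?query=" + disable, null);
    }
}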

Option 2: Batched transaction

The second way of doing transactions in the listener is to send a set of statements as a single command and instruct the listener to run them as a single transaction. This is done using the "execute" option of the listener's transaction command. With this method, the listener handles starting the transaction and committing (or rolling back) for you under the covers. There is no need for you to run transaction enable, commit, or disable.

This support was added starting in 12.10.xC7.

Batched transaction with Mongo clients

The syntax for running a batch of commands as a single transaction is as follows.

You just specify the transaction command with the execute option ({"transaction" : "execute"}) and then, in the "commands" field, pass an array of documents that represent your insert, update, or delete statements. The syntax for these individual commands is just as if you were running insert, update, or delete as a command in the Mongo syntax.

If all statements in your batched transaction succeed, the listener will automatically commit it. The result returned includes an array that indicates the number of documents updated (or inserted or deleted).
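As a sketch, the same funds transfer as a single batched transaction through the MongoDB Java driver might look like the following; the individual commands use the standard Mongo update command shape ("q" for the condition, "u" for the update), and the class and method names are mine:

import java.util.Arrays;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class BatchedTransfer {
    // Runs both updates as one transaction; the listener commits only if every command succeeds.
    static Document transfer(MongoDatabase db) {
        Document debit = new Document("update", "accounts")
                .append("updates", Arrays.asList(
                        new Document("q", new Document("customer_num", 191187).append("type", "checking"))
                                .append("u", new Document("$inc", new Document("balance", -1000)))));
        Document credit = new Document("update", "accounts")
                .append("updates", Arrays.asList(
                        new Document("q", new Document("customer_num", 191187).append("type", "saving"))
                                .append("u", new Document("$inc", new Document("balance", 1000)))));
        return db.runCommand(new Document("transaction", "execute")
                .append("commands", Arrays.asList(debit, credit)));
    }
}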

Batched transaction with REST clients

To run a batched transaction through REST, you pass that exact same command document in a GET request on the $cmd pseudo collection.

Batched transaction with a finally clause

There's one more thing! We also added the ability to include a finally clause in the listener's batched transaction command. The finally clause is simply a set of statements that will always be run at the end of the transaction, regardless of whether the transaction is committed or rolled back.

This actually allows you to do things outside the traditional notion of transactions. One of the things about the listener is that it tries to efficiently use resources across sessions, by using a connection pool for example. This means that in normal auto commit mode, you are not guaranteed to run your statements on the exact same JDBC connection each time, which is perfectly acceptable when each statement is auto committed independently. Transaction mode, on the other hand, will bind a single connection to your session until you exit transaction mode. This, combined with the SQL passthrough feature, allows you to use transactions to do any sequence of statements that need to be run on the same JDBC connection.

Suppose, for example, that you need to set an environment variable before running a query (and you don't want to set this environment variable globally for all connections created by the listener). The best way to do this is to execute both through the listener's batched transaction command, which ensures that your query runs on the same JDBC connection on which the environment variable was set. In this use case, the finally block is key: you want to make sure the environment variable gets unset before that connection goes back to the connection pool, and you need that unsetting to happen whether or not the rest of your transaction succeeds.

You can also run queries (the find command) within the batched transaction syntax, whether SQL passthrough queries or regular queries on tables and collections. The query results are returned as part of the response to the {"transaction" : "execute"} command.
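As a rough sketch of what such a command could look like when built with the MongoDB Java driver, using the system.sql pseudo collection for SQL passthrough; note that the field name of the finally clause and the specific SET ENVIRONMENT option and values here are assumptions for illustration, not confirmed syntax:

import java.util.Arrays;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class EnvThenQuery {
    static Document runWithEnv(MongoDatabase db) {
        // Set a session environment option via SQL passthrough (option and values are illustrative).
        Document setEnv = new Document("find", "system.sql")
                .append("filter", new Document("$sql", "SET ENVIRONMENT OPTCOMPIND '0'"));
        // The query that must run on the same JDBC connection as the SET ENVIRONMENT above.
        Document query = new Document("find", "system.sql")
                .append("filter", new Document("$sql", "SELECT * FROM accounts WHERE customer_num = 191187"));
        // Restore the option whether or not the rest of the transaction succeeds.
        Document restore = new Document("find", "system.sql")
                .append("filter", new Document("$sql", "SET ENVIRONMENT OPTCOMPIND '2'"));

        return db.runCommand(new Document("transaction", "execute")
                .append("commands", Arrays.asList(setEnv, query))
                .append("finally", Arrays.asList(restore)));
    }
}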

So that's it! Transaction support as you've come to know it in relational databases such as IBM Informix is now available to applications using REST or Mongo client drivers!

To find the documentation on the listener transaction command check out this page in the IBM Informix Knowledge Center.

Hopefully, you've already heard about the Informix wire listener. But did you know that there are actually three different types of listeners? This means that the listener provides application developers with three completely different paradigms for connecting their applications to the Informix database server.

Suppose you are rapidly developing a new application and crave the flexibility and power that the schemaless "NoSQL" data model will provide you to adapt to use cases going forward that you cannot necessarily foresee at the current moment. Or maybe you want a quick and easy way to get data into and out of Informix for a web application developed with the newest and hottest web framework, and you want to do so in a way that doesn't make you think or worry about whether the underlying data is in a relational, time series, or JSON schema. Or maybe you are developing an Internet of Things application for a wide array of sensors that will be gathering data that needs to be pushed to your database in a way that is simple, lightweight, and asynchronous. The Informix wire listener can help you build such solutions, and many more! And it allows you to build such applications all the while harnessing the performance, scalability, and reliability of the enterprise-class, embeddable database that is Informix.

So what are these three listener types?

The Mongo listener

The Mongo listener was the first and original listener type and allowed applications developed for MongoDB to run against Informix without modification. This enabled an entirely different type of application -- flexible, schemaless, and in line with a rapid application development paradigm -- to be run against the Informix database server. It also enabled a whole new ecosystem of modern and open source drivers, those developed for the MongoDB protocol, to be used to connect applications to Informix.

But the original Mongo listener type was about more than just Mongo compatibility. It enabled a new "hybrid" application model -- one that combined traditional relational data with the new NoSQL, JSON-based data. The listener was developed to seamlessly work with any type of data that Informix supports, be it relational, JSON, or time series.

Want to learn more about using Mongo drivers to connect to Informix? Check out a great tutorial written by Martin Fuerderer and hosted on the Informix and NoSQL blog. This tutorial takes you step by step through understanding the listener, NoSQL data, and how to use Mongo syntax and tools to connect to Informix.

The REST listener

REST. So ubiquitous in the world of web applications. This was the next evolution of the Informix wire listener. The listener's REST interface allows any application to get data into and out of Informix using HTTP. A simple interface that opened up Informix connectivity to just about every modern programming language without the need for an Informix-specific driver.

Want to learn about using REST to communicate with Informix? Head on over to Martin Fuerderer's article First Steps with the REST API and then check out the REST API syntax page in the Informix Knowledge Center to get more details and examples of the REST syntax for accessing and querying data.

The MQTT listener

New to 12.10.xC7, the MQTT listener type enables you to use the MQTT publish model to insert data into the Informix database server. This is perfect for Internet of Things scenarios where data collected on the "edge" needs to be published and stored in a central data store in a lightweight and asynchronous way.

And last but not least, it is worth reiterating that all three listener types work seamlessly on relational, JSON, or time series data, or even a combination. No matter how the underlying data is stored, the Informix wire listener manages those details automatically for you, allowing you as the application developer to focus on your application instead of the nuances of data access.

Want to get started, but need more info on how to start different or multiple listener types? You can find that here in the IBM Informix Knowledge Center.

First, apologies to those who have been following this blog regularly in the past! Unfortunately, this blog had remained dormant for far too long. Make no mistake, it was just the blog that had been dormant, not the product or the experts behind it. Our team has been busy enhancing our beloved IBM Informix product as well as producing content for the user community. As evidence, you will see a series of posts erupting here that point to various blogs, presentations, sample applications, and tutorials available in the public domain.

It is time to revive this blog and provide an avenue for our experts to blanket the user community with Informix wisdom!

Informix has four protocols, or communication channels, that you can use to access the server: SQLI, DRDA, JSON, and REST. If you are familiar with one of these and want to know how to perform the same basic operations using one of the others, consider this post to be your Rosetta stone.

The sample application is written in Java. It works through, in turn, the Informix JDBC driver (which uses the SQLI protocol), the DB2 JCC driver (which uses DRDA), the MongoDB driver for Java (JSON), and native Java HTTP classes (REST). For each of these protocols, it goes through the steps to:

Create a user

Create a database

Create a table/collection

Create a row or document (entry)

Read an entry

Update an entry

Delete entries

Drop a table/collection

Drop a database

Drop a user

The code should be self-explanatory, but there are some differences between the APIs worth noting.

Using the SQLI and DRDA protocols, connections are stateful. There is an open channel maintained between the port listening for commands on the server and the port on the client sending commands. So, transaction states are easily maintained, and multiple commands can be executed without re-authenticating. Most operations are identical between these two, except that the URLs are different and Informix JDBC requires an INFORMIXSERVER setting. You cannot drop the database you are currently accessing. Authentication takes place at the server level, allowing you to access any database for which you have the appropriate privileges.
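For illustration, the two JDBC URL forms might look like this; host, ports, database, server name, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class TwoDrivers {
    public static void main(String[] args) throws Exception {
        // SQLI via the Informix JDBC driver; note the required INFORMIXSERVER setting.
        Connection sqli = DriverManager.getConnection(
                "jdbc:informix-sqli://myhost:9088/stores7:INFORMIXSERVER=ol_informix1210",
                "informix", "mypassword");

        // DRDA via the DB2 JCC driver; the URL form differs, but the SQL you run is the same.
        Connection drda = DriverManager.getConnection(
                "jdbc:db2://myhost:9089/stores7", "informix", "mypassword");

        sqli.close();
        drda.close();
    }
}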

The MongoDB clients are what I'll call semi-stateful. Client and db objects have the appearance of maintaining a connection with the database server, but really the connection information is cached on the client side and as far as the listener is concerned every operation is independent. (Exceptions to this model are beyond the scope of this write-up.) The database you are dropping must be the current database. Authentication takes place at the database level.

Fitting the REST model, there is no concept of a stateful connection in the RESTful API. However, there are two ways to authenticate: Providing a user name and password as is common, and sending a session id cookie from a previously authenticated operation. Using a session cookie is how you can, say, authenticate against the admin database, and use your userAdminAnyDatabase privilege to create a new user within another database.

Users with the ability to create other users must already exist for this demo to work. Examples:

The download file contains the source code and Eclipse project that can be used to build it. It also contains all the required drivers; it is your choice whether to use the ones included or use more modern ones you have installed. Included in the project is an ANT script for building the jar.

Run the demo with the command: java -jar basicChecks.jar connSettings.properties

Running the demo without the required properties file will give you instructions for what the file should contain.

BIGINT Time

I wasn't happy with the overall efficiency of the built-in time types, so I created a couple of user-defined types that represent time as distinct types of BIGINT. In these UDTs, a TIMEPOINT is the number of microseconds since the UNIX epoch, January 1, 1970 00:00:00 GMT, and a MICROSECOND is simply a duration of time. The internal implementation of a type affects three factors of database operations: storage space, performance, and application development. This implementation of time types has great potential to improve upon Informix's internal implementation in all three aspects. However, the potential does not always hold up in practice. Let's look at when it does, when it does not, and why.

Storage Space

The storage space requirements for DATETIME and INTERVAL types can be calculated using this formula, in bytes:

(total number of digits for all fields)/2 + 1

For example, a DATETIME YEAR TO FRACTION(5) requires (4+2+2+2+2+2+5)/2 + 1 = 11 bytes; you have to round up. A 64-bit encoding always requires 8 bytes, never more and never less. This matters more or less according to your table definition. Examples:

The table cust_calls2 is derived from the stores7 database table cust_calls.

create table cust_calls2
(
    customer_num  integer,
    call_dtime    datetime year to minute,
    call_interval interval minute(4) to second,
    user_id       char(18) default user,
    call_code     char(1),
    call_descr    varchar(240),
    res_dtime     datetime year to minute,
    res_descr     varchar(240)
);

The table cust_calls3 replaces the server native types with their UDT counterparts.

create table cust_calls3
(
    customer_num  integer,
    call_tp       timepoint,
    call_mu       microsecond,
    user_id       char(18) default user,
    call_code     char(1),
    call_descr    varchar(240),
    res_dtime     datetime year to minute,
    res_descr     varchar(240)
);

So, if you load these tables with a lot of rows, how much space do you save?

Let's run the following:

select
    tabname,
    rowsize,
    nrows::bigint as nRows,
    pagesize,
    npused::bigint as nPages,
    (npused * pagesize / (1024*1024))::decimal(5) as megabytes
from systables where tabname like 'cust_calls%';

Results:

tabname      cust_calls2    cust_calls3
rowsize      523            528
nrows        3670016        3670016
pagesize     2048           2048
npages       1223339        1223339
megabytes    2389.3         2389.3

Under these conditions, there is no space saving at all. Why is this?

The page size is 2048 and the row sizes are 523 and 528 bytes, which means you can only get 3 rows per page regardless of which table definition you use. Also, the 64-bit implementation has a slightly larger row size because the DATETIME and INTERVAL definitions do not cover the full range of their type capabilities: the YEAR TO MINUTE column needs (4+2+2+2+2)/2 + 1 = 7 bytes and the MINUTE(4) TO SECOND interval needs (4+2)/2 + 1 = 4 bytes, 11 bytes in total, versus 16 bytes for the two BIGINT-based UDTs, which accounts for the 5-byte difference. You will not see a savings in space unless you can increase the rows-to-page ratio.

So, what happens when you have smaller rows and higher precision DATETIME/INTERVAL definitions? Let's take an extreme case. In this test, the table definitions only have time types in them.

create table cust_calls2a
(
    call_dtime    datetime year to fraction(5),
    call_interval interval day(9) to fraction(5)
);

and

create table cust_calls3a
(
    call_tp timepoint,
    call_mu microsecond
);

This time the table statistics are:

tabname      cust_calls2a   cust_calls3a
rowsize      23             16
nrows        3670016        3670016
pagesize     2048           2048
npages       49595          36337
megabytes    96.865         70.971

In this condition, we are looking at a space savings of over 26%. Your database probably has a mix of conditions, so your overall space savings probably lies somewhere between 0 and 26 percent.

Performance

Using the same tables as above, let's exercise the server with some queries. In each case the maximum value for the time column is selected.

select max(call_tp)   -- or call_dtime
from cust_calls3;

Since there are no indices, this forces the server to perform N-1 comparisons, where N is the number of rows. Just to be clear, these tests were run on an older laptop and the server is in no way optimized. The relative performance across conditions should be reasonably robust, but your mileage may vary. In particular, if you are using a solid-state drive, the IO time will be much lower and so the computational time will be proportionately higher; this could dramatically change the relative performance percentages reported here.

Finding the maximum value for call_dtime was approximately 9 percent faster than doing the same for call_tp. In practice, the operations performed by the server are faster for this structure,

typedef struct dtime {
    short dt_qual;
    dec_t dt_dec;
} dtime_t;

than they are for an integer base type. How could this be? The Informix math libraries convert everything to decimal prior to doing the math, so the TIMEPOINT (BIGINT) values may be converted to decimal before the comparison is performed, and that conversion time may be adding to the overall processing time. I played with some queries and found that there was enough difference between

select count(i::decimal(32)) from x1m

and

select count(i) from x1m

to make this hypothesis plausible. It is not confirmed, but it is plausible. In this test, i is a column in a view over sysmaster:sysdual with 1 million rows.

Now let's look at the abridged versions of the tables (cust_calls2a and cust_calls3a). Here we have a complete reversal: the query on max(call_tp) was 24 percent faster than the query on max(call_dtime), on average. I suspect this is because the information density on disk is greater, 26 percent denser, for my UDTs, so the IO time is reduced, and that more than makes up for the increased CPU processing overhead.

The summary is that for tables with a low proportion of time values, using TIMEPOINT and MICROSECOND does not gain you much in disk space savings, and can actually cost you a little in CPU cycles. However, for tables with a high proportion of high precision time values, using these UDTs can save you in the neighborhood of 26 percent on disk space and around 24 percent in overall query execution time.

Application Development

I wrote up the concepts for why types like these would be nice to have for application development before implementing them. That material is covered in the write-up, “Timestamps with Time Zone Independence Implemented as 64-bit Integers”. Since these are not built-in types, you cannot use API methods to directly bind them to application time value types. For example, you cannot call ResultSet.getTimestamp() on a TIMEPOINT column. However, using them is pretty simple. You just pass the values through the API as BIGINT (64-bit) values, converting to or from the application language native types. Below are some snippets of Java code that do this.
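A complete demonstration program is included in the attached download; as a minimal sketch of the idea, you bind and fetch the TIMEPOINT column as a plain BIGINT with setLong()/getLong() and do the conversion to and from java.sql.Timestamp yourself. The explicit casts in the SQL and the loss of sub-millisecond precision in the Timestamp conversion are simplifications of my own:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class TimepointIO {
    // Write: a TIMEPOINT is just microseconds since the epoch, passed as a BIGINT.
    static void insertCall(Connection conn, int customer, Timestamp when) throws Exception {
        long micros = when.getTime() * 1000L;   // milliseconds -> microseconds
        try (PreparedStatement ps = conn.prepareStatement(
                "insert into cust_calls3 (customer_num, call_tp) values (?, ?::timepoint)")) {
            ps.setInt(1, customer);
            ps.setLong(2, micros);
            ps.executeUpdate();
        }
    }

    // Read: fetch the BIGINT and rebuild a Timestamp; no time zone is involved.
    static Timestamp readCall(Connection conn, int customer) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "select call_tp::bigint from cust_calls3 where customer_num = ?")) {
            ps.setInt(1, customer);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? new Timestamp(rs.getLong(1) / 1000L) : null;
            }
        }
    }
}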

Summary

In terms of storage space, if time values are a small portion of your table rows, you will not get much, if any, space savings, but if time values are a large portion of your table rows, you can see up to, in round numbers, 25 percent savings in space requirements. The server-side operations on these types are not more efficient than the native types, because the server converts integer types to decimal prior to performing any math operations, and the DATETIME and INTERVAL values are already in DECIMAL format. However, you can get around 25 percent better overall query performance because more information is obtained per disk IO operation. Converting between database types and application types is more efficient in terms of CPU cycles because the conversion only involves integer operations. More significantly, the database values are time zone independent, and you do not have to code your applications to be aware of the time zone in which the application was running when the values were stored.

The attached files include the SQL routines to define the MICROSECOND and TIMEPOINT types, routines to convert them to and from the native time types, and simple math operations for addition, subtraction, multiplication, and division. Included is a write-up containing the theoretical reasoning for implementing these types. There are also a variety of SQL test scripts and a Java program demonstrating read and write operations on these types in an application.

The download code is more of a proof of concept than a fully fledged, production-ready implementation. The existing conversion routines could be better optimized, and operations beyond simple math could be implemented. Comments and recommendations for improvement are welcome.

The best-fit Java class for an Informix DATETIME value is java.sql.Timestamp. java.sql.Timestamp technically inherits from java.util.Date, although there are some semantic differences that are not typical of inheritance relationships. All the constructors, getters, and setters that are not deprecated for Date and Timestamp use values based on Greenwich Mean Time (GMT): they are offsets in milliseconds from the epoch, January 1, 1970 00:00:00 GMT. So, it would be logically consistent for Timestamps to convert to DATETIMEs using the GMT reference frame by default. However, the JDBC Tutorial and Reference specifies that, when no Calendar is provided, Timestamp values are converted to the time zone of the JVM in which the application is running. The difficulties surrounding Timestamps and time zones are not limited to the scope of the Informix JDBC driver; they are generic to all of JDBC because that is the specification.

This model creates problems in a variety of ways. For distributed applications, the Date or Timestamp values might be streaming into the database from JDBC drivers located in different time zones. With multi-tier applications, you might find yourself contending with a user interface in time zone A, the JDBC driver in time zone B, and the database server in time zone C. The driver is the library where external types are converted to internal types; so, where it is running is where the local time zone comes into play. It can be difficult for either the front end or the database to know in what time zone the driver is operating. Dealing with the conversion from Timestamp in GMT to DATETIME in the JDBC driver locale has to be done in the application layer directly interfacing with the JDBC driver.

The conversion done by the driver applies in both directions, and this can lead to complications if anything other than the JDBC Timestamp class is used in conjunction with DATETIME values. If strings representing DATETIMEs or DATETIME literals are used anywhere in your application SQL (including CURRENT), the values will not be converted on the way in, but will be converted on the way out. Similarly, different APIs will not convert in either direction. If you insert new Timestamp(86400001) (January 2, 1970 00:00:00.001 GMT) through JDBC, and select it back from within the same JVM, you will get that same value back. So, you might expect that selecting the same row-column from, say, DB-Access would also give you the same value, but it will not, because the value was converted to the JVM time zone on the way in through JDBC but not converted on the way out through ESQL/C. If you are in the U.S., you will get some time on January 1st, not even the same day. The reverse also applies: you may find yourself with values inserted through other APIs, which perform no conversion, that are converted to some other time zone when selected through JDBC.

DATETIMEs, of course, do not contain time zone information. There are basically two ways to deal with this: record the time zone information in another column along with the DATETIME, or establish a convention that all the DATETIMEs in the database are in the same time zone. Establishing a convention is the simpler approach, both from an application implementation perspective and in terms of being able to easily compare values in different rows, columns, or tables. If you know that every Java application that will ever access the data will operate in its own JVM, you can code your applications to set the default time zone of the JVM, but this is a problematic solution when more than one independent application is running in the same JVM, such as servlets in a web server. It is unclear what would happen with different threads of execution changing the time zone for all other threads, but if they were actually able to do that, it would not be pretty.

With JDK 1.1, two things happened: all the getters and setters for java.util.Date whose meaning was ambiguous with respect to time zone were deprecated, and the java.util.Calendar class was introduced. Along with the Calendar class itself, methods accepting a Calendar object were added to the JDBC interface, for example PreparedStatement.setTimestamp(int parameterIndex, Timestamp x, Calendar cal) and ResultSet.getTimestamp(int columnIndex, Calendar cal). In these methods, the Calendar object establishes the frame of reference used to convert the GMT offset in the Timestamp object to or from a DATETIME value. If you create a constant Calendar object in your applications, and use it every time you read or write a Timestamp object to a DATETIME, the DATETIME values will remain consistent with each other, and the meaning of a value will not change depending on the time zone of the JVM.

Using a GMT calendar is efficient because it requires fewer operations to convert between the Java classes and the value in the database. Also, the value will not change if other APIs, like ODBC or ESQL/C, are used. Conceptually, what is stored in the database is January 1, 1970 00:00:00.000 GMT + tstamp.getTime() milliseconds.

Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));

Coordinated Universal Time (UTC) is a close relative to GMT and is just as viable.

In this model, all date and time values in the database are on the same time line, and conversion to other time lines or calendar systems are handled at the user interface level. Using UTC or GMT for all DATETIME values provides a simple model to implement in applications. It also removes all the problems related to knowing the time zone of the JDBC driver runtime or if applications written in anything other than Java will ever be used to access the data.
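Here is a minimal sketch of this convention in JDBC, using one constant GMT Calendar for every read and write; the table and column names are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

public class GmtTimestamps {
    // One constant frame of reference for every DATETIME read and write.
    static final Calendar GMT = Calendar.getInstance(TimeZone.getTimeZone("GMT"));

    static void write(Connection conn, Timestamp ts) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "insert into events (ev_time) values (?)")) {
            ps.setTimestamp(1, ts, GMT);          // stored relative to GMT, not the JVM's zone
            ps.executeUpdate();
        }
    }

    static Timestamp read(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement("select ev_time from events");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getTimestamp(1, GMT) : null;   // interpreted as GMT on the way out
        }
    }
}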

Starting with version 12.10.xC3, Informix can automatically tune the logical and physical logs as needed.

You can set the AUTO_LLOG configuration parameter to enable Informix to dynamically add logical logs when a lack of logical log space causes frequent checkpoints, blocking checkpoints, or long checkpoints. In addition, with the AUTO_LLOG configuration parameter you can specify the dbspace in which to create new logical log files and the total size of all logical log files at which the server stops adding logs for performance.

You are probably wondering about the DYNAMIC_LOGS configuration parameter, which has existed since Informix version 10. The AUTO_LLOG and DYNAMIC_LOGS configuration parameters work in different situations and do not interact with each other. When the AUTO_LLOG configuration parameter is enabled, logical logs are added to improve checkpoint performance. When the DYNAMIC_LOGS configuration parameter is enabled, logical logs are added under more urgent conditions, such as when a long transaction threatens to block the server. For example, the maximum size that is specified in the AUTO_LLOG configuration parameter does not affect the amount of log space that can be added by the DYNAMIC_LOGS configuration parameter. Similarly, the value of the AUTO_LLOG configuration parameter does not affect the amount of log space that you can add manually.

Informix can now also expand the size of the physical log as needed to improve performance, by creating an extendable chunk for the physical log. An extendable chunk is extended automatically when additional storage space is required.

Typically, the physical log is created in the root dbspace, unless you create an Informix instance during installation. However, for optimal performance it is better to create a separate, dedicated dbspace for the physical log on a different disk from the root dbspace and move the physical log out of the root dbspace. If an instance is created during installation, the physical log is created in a separate dbspace called plogspace, with a default size that depends on the value of the AUTO_TUNE_SERVER_SIZE configuration parameter.

By default, the chunk that you assign to the plogspace is extendable; therefore, the initial size of the chunk can be small. The database server automatically expands the chunk when the physical log requires more space.

The following Informix information centers will be sunset in the near future:

Informix 11.10 information center

Informix 10 information center

Informix products information center

After these information centers are sunset, links to them will be redirected automatically to the resources listed below. You can download final versions of the product documentation, which are being provided as-is, from these resources:

Information centers for current Informix releases will be replaced by the new IBM Knowledge Center, which is currently in Beta. Links from the current information centers will be redirected automatically to the new IBM Knowledge Center.

At present, most of the Informix configuration parameters can be modified dynamically. Typically, users run the 'onmode -wf' or 'onmode -wm' command to dynamically change a specific configuration parameter. But there may be a situation when you need to modify multiple parameters at once. For example, you have ten Informix instances and want to tune some of the SQL statement cache configuration parameters on all of them.

Now, you can import a configuration file using the onmode command to change multiple configuration parameters at once. Importing a configuration file is often faster and more convenient than changing one configuration parameter at a time. You can use the 'onmode -wi' command to import a file that contains new values for multiple configuration parameters. If the parameters are dynamically tunable, the database server applies the new values. The import operation ignores configuration parameters in the file that are not dynamically tunable, as well as parameters whose new value is the same as the existing value.

However, keep in mind that an import operation changes the values of configuration parameters only in memory. It does not modify the values in the ONCONFIG file on disk.

Let's take the same example as before and modify some of the SQL statement cache configuration parameters with an import operation. You prepare a file containing the new values and import it with 'onmode -wi'.
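For example, the file might look like the following; the three values that actually change are the ones shown in the log excerpt below, while the remaining parameters and their values are only illustrative:

$ cat /tmp/onconfig.new
STMT_CACHE          1
STMT_CACHE_HITS     1
STMT_CACHE_NOLIMIT  1
STMT_CACHE_SIZE     1024
STMT_CACHE_NUMPOOL  2

$ onmode -wi /tmp/onconfig.new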

The above command generates appropriate messages on screen and in the message log. The following is an excerpt of the message log:

14:50:42 Importing configuration from '/tmp/onconfig.new':
14:50:42 Value of STMT_CACHE has been changed to 1.
14:50:42 Value of STMT_CACHE_NOLIMIT has been changed to 1.
14:50:42 Value of STMT_CACHE_HITS has been changed to 1.
14:50:42 Config Import Complete. 3 tunable parameters processed.
3 modified, 0 unchanged, 0 could not be tuned.

Notice that only three of the five parameters were changed; the others either are not dynamically tunable or already had the same value.

Just as the 'onmode -wi' command imports a configuration file, you can also export the configuration parameters from memory to a file with the 'onmode -we' command.

On certain operating systems (for example, Linux) you may notice that dbaccess (and other client applications) always performs a DNS lookup while connecting to a database, even after finding the hostname or IP address in the local hosts file. This behavior can cause slow connections if you have DNS-related problems. Tracing a dbaccess request with strace shows the local hosts file being read, followed by a query to the DNS server.

The question is why a connection request to the database server from dbaccess tries to access the DNS server for hostname resolution after it has already found the hostname in the local hosts file.

Traditionally, hostname and service name resolution were performed by functions such as gethostbyname() and getservbyname(). These traditional lookup functions are still available; however, they are not forward compatible with IPv6. Instead, the IPv6 socket API provides new lookup functions that consolidate the functionality of several traditional functions. These new lookup functions are also backward compatible with IPv4, so a programmer can use the same translation algorithm in an application for both IPv4 and IPv6. getaddrinfo() is the new primary lookup function, and a connection request from dbaccess ultimately calls this socket API. You can pass several parameters to getaddrinfo(); one of them is an addrinfo structure. By default, dbaccess passes the value AF_INET6 for addrinfo.ai_family. The ai_family field indicates the protocol family associated with the request, and will be PF_INET6 for IPv6 or PF_INET for IPv4.

If ai_family is set to AF_INET6 (IPv6), getaddrinfo() will query the DNS every time. If ai_family is set to AF_INET, it does not query the DNS server. You can consult the man page for getaddrinfo() for detailed information.

Beginning with Informix 10.00.xC4 and Client SDK 2.90.xC4, the database server checks, on startup, whether IPv6 is supported by the underlying operating system. If IPv6 is supported, it is used. If the underlying operating system does not support IPv6, the IPv4 address is used.

If you have a problem with DNS lookups and encounter slow connections to the database, you can use the environment variable IFX_DISABLE_IPV6 (IFX_DISABLE_IPV6=1) to disable IPv6. This sets ai_family to AF_INET only, so no subsequent query is made to the DNS server.