Archive for December, 2013

Every now and then (hopefully not often), there is a need to browse through the logical logs and see what’s been going on with your system. In this post, I’ll cover the purpose and basics of logical logs, and provide some info on how to read what’s inside them.

In the simplest terms, logical logs provide a way to ensure minimal data loss in case the system comes to a halt. Every operation and every transaction that is executed is written to one or more logical logs. So if the system crashes for some reason, when it is re-initialized you’ll see in the online log that a logical recovery is performed. That means the system goes through the logical logs since the last checkpoint and re-executes the operations that were executed up to the moment the system halted. Transactions that were started but not completed are rolled back.

Knowing what is written to the logical logs, and when, provides the opportunity to explore the system’s behavior in some cases. For example, it is a way to identify the busiest tables and indexes during some period of time, or to explore in detail what went on during a single transaction in order to detect possible errors in an end-user application that modifies the data.

The size and the number of logical logs in your system are determined by the onconfig parameters LOGSIZE and LOGFILES, respectively. When all the logs are used (flag “U” in the following list), the system reuses the first one, and so on, in an endless cycle. The logs that are to be reused should always be backed up first (flag “B” in the following list), using the onbar or ontape utilities. By setting the onconfig parameter LTAPEDEV to /dev/null you can discard the logical logs, but this should never be done in a production environment.
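The reuse cycle can be sketched in a few lines of code. This is a toy model, not Informix internals; only the name LOGFILES comes from the actual onconfig parameters.

```python
# Toy model of the endless logical-log cycle: once the last log fills
# up, the system wraps around to the first one again. Only the name
# LOGFILES comes from the real onconfig; everything else is illustrative.
LOGFILES = 3  # number of logical logs configured

def next_log(current: int) -> int:
    """Index of the next logical log to use, wrapping around."""
    return (current + 1) % LOGFILES

log, order = 0, []
for _ in range(5):
    order.append(log)
    log = next_log(log)
print(order)  # [0, 1, 2, 0, 1]
```

In the real system, of course, a log can only be reused once it has been backed up.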

You can see the state of your logs by executing onstat -l. This is an example of the output:

Each logical log is identified by its uniqid (in the list above, logs have uniqids 1066232 – 1066234, and 1064843 – 1064844). The flag “C” indicates that this is the current log being written. The next log has a smaller uniqid than the current – it is the next one to be used (and overwritten in the current logical space). The log with the “L” flag is the log containing the last checkpoint.

The onlog utility is used to display the contents (or part of the contents) of a logical log. The main thing you have to know is which logical log(s) you want to see (i.e. their uniqids). Apart from that, there are a number of filters that can be used to reduce the output and make it more readable (as you’ll see, there is a lot of information written to the logs). The complete syntax of the onlog utility can be found here.

Filtering can be done by username, partition number (often referred to as partnum in Informix) and/or transaction id. The username belongs to the user that executed the statements or commands resulting in data manipulation. The partition number identifies the table or index the operation is executed upon. The easiest way to find out the partnums of tables or indexes is to query the sysmaster:systabnames table. In some outputs it is shown as a decimal number, in others as hexadecimal, so you may need to use the hex() function in your SQL. And finally, the transaction id (denoted here as xid) is a number identifying the transaction – all log entries that belong to the same transaction have the same xid (starting with the BEGIN entry and ending with COMMIT or ROLLBACK). But be careful with this one – it’s not unique in any way. Often you’ll see the same xid reused for the next transaction of the same user for some time, after which a new xid is assigned to that user, and the previous one may be assigned to some other user.
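Since the same partnum shows up as decimal in one place and hexadecimal in another, a small conversion often comes in handy. A sketch, using the partnum value 0x200324 from the HINSERT example in this post:

```python
# Convert a partnum between the hexadecimal form shown by onlog and
# the plain integer stored in sysmaster:systabnames. The value used
# here is the partnum from the HINSERT example in this post.
partnum_hex = "200324"
partnum_dec = int(partnum_hex, 16)
print(partnum_dec)               # 2097956
print(format(partnum_dec, "x"))  # 200324
```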

If you’re not backing up the logical logs, the onlog utility will only be able to display the contents of the logs currently on disk, but if you do have the logs backed up, you can fetch and display those as well. If the ontape utility is used for logical log backup, the backed-up logs can be fetched with onlog too. If the backup is made with onbar, then you need to use onbar -P (with the rest of the options the same as you’d use with onlog) to fetch the logs sitting in your backup.

There are 3 entries shown above. The addr column is the address of the entry (hexadecimal) in the current log. Len is the length of the entry in bytes (decimal). The next column is type – one of the predefined identifiers specifying the action executed. The xid column is the transaction id. The id column mostly has the value 0, but can take other values if the entry is linked to the previous log. Link is the address of the previous entry that belongs to the same transaction. The rest of the data in each row is type-specific. The list of all entry types and the corresponding additional data can be found here.

In a regular production environment, it seems reasonable to expect most entries to be of these types:

Reading through the documentation on the additional data for these types, we can see that the partition number is always involved, so it’s fairly easy to infer which table/index the action was executed on. But that’s about it. There is no information about which data was written, deleted, etc. In order to find that out, we need to look at the long output of onlog, obtained with onlog -l. This option is not fully documented; the documentation states: “The listing is not intended for casual use.” Nevertheless, there are some things you can find out from it. Here is an example of the onlog -l output for a single entry.

Say we have the table test_table with columns col1 of type INTEGER, and col2 of type CHAR(10), and execute this command:

insert into test_table values(123000123, 'dva');

This will be written to the log as 3 entries – BEGIN, HINSERT and COMMIT – provided there are no primary or foreign keys, nor other indexes on that table (remember that when a single SQL statement is executed, it is implicitly executed as a single-operation transaction). Let’s take a look at the HINSERT entry:

Each byte is represented by two hexadecimal digits. The entire log entry is shown in both hexadecimal and ASCII views, side by side, which makes it easier to identify the text strings. The first part of each entry is the header bytes; the rest is operation-type dependent.
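This kind of side-by-side view is easy to re-create; here is a rough sketch of such a hex/ASCII dump (the layout is approximate, not onlog’s exact format):

```python
# A rough re-creation of a side-by-side hex/ASCII dump, similar in
# spirit to what onlog -l prints (the exact layout here is made up).
def hexdump(data: bytes, width: int = 16) -> list:
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hex_part = " ".join("%02x" % b for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append("%08x  %-*s %s" % (off, width * 3, hex_part, ascii_part))
    return lines

# Dump the char(10) value from the example: "dva" padded with spaces.
for line in hexdump(b"dva" + b" " * 7):
    print(line)
```

Non-printable bytes are shown as dots in the ASCII column, which is why only the text strings stand out in the real onlog -l output.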

The corresponding header data are marked in the following lists (some numbers in the header are decimal). The same list is repeated several times, with different data pairs marked.

The additional data for HINSERT are three numbers: the partition number (hexadecimal), the rowid (decimal), and the row size in bytes (decimal). In this example, those are: 200324, 103, 14. All three are marked in the list below (bold, italic and underline, respectively):

The 14 bytes of data in this example refer to one integer (4 bytes) and one char field defined as char(10). Remember that, even though we’re inserting a three-letter word, the rest of the char field is filled with spaces (ASCII code 0x20).
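The row image is easy to reproduce. A sketch, assuming big-endian integers (as on the SPARC system the example was taken from) and ignoring anything beyond these two columns:

```python
# Reconstruct the 14-byte row image from the example: a 4-byte INTEGER
# (big-endian is an assumption, matching the SPARC machine the example
# was taken on) followed by a CHAR(10) value padded with spaces (0x20).
import struct

row = struct.pack(">i", 123000123) + b"dva".ljust(10, b" ")
print(len(row))   # 14
print(row[4:])    # b'dva       '
```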

When observing update operations, note that both the before- and after-images are written to the log, but only the changed columns are written, not all of them. If the amount of data being written is small enough, a single HUPDATE entry is written to the log; otherwise, two entries are written: HUPBEF and HUPAFT.
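The idea of logging only changed columns can be sketched as a simple diff of the two row images (this mimics the concept, not Informix’s on-disk format; the column names and values are hypothetical):

```python
# Toy illustration of the "only changed columns are written" idea:
# diff the before- and after-images and keep just the differing
# columns. This mimics the concept, not Informix's on-disk format.
def changed_columns(before: dict, after: dict) -> dict:
    return {col: (before[col], after[col])
            for col in before
            if before[col] != after[col]}

before = {"col1": 123000123, "col2": "dva       "}
after  = {"col1": 123000123, "col2": "tri       "}
print(changed_columns(before, after))
# {'col2': ('dva       ', 'tri       ')}
```

Only col2 shows up, since col1 was not modified – which is why an update touching one small column produces a much smaller log entry than the full row.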

In conclusion, reading the logical logs can be very useful, and is sometimes the only way to get information about how your system works. But there are a lot of things one should be aware of before trying to read the log structure, the most important being:

the structure of the tables and indices

how the data are stored (how many bytes are needed for each data type etc.)

Given these important notes, and the fact that the logical log structure is not documented, it could be (and very likely is) different not only from one version to another, but also from one platform to another. The examples above were taken on a standalone Solaris SPARC system running Informix 11.70.FC5.

IBM has recently moved (well, copied to be exact) the content of Informix Information Center to Knowledge Center. Not only is it more user-friendly, but also significantly faster, especially for searches. Kudos, documentation team!