IDS Experts

Multi-node Active Cluster for High Availability (MACH11) is the code name for the new Continuous Availability feature introduced in Informix Dynamic Server (IDS) 11. This feature significantly expands the high availability options in IDS to provide increased failover, capacity, flexibility, and scalability.

Traditionally, IDS has provided multiple robust solutions for supporting high availability data replication. Previous releases of IDS have supported two replication technologies: Enterprise Replication (ER) and High Availability Data Replication (HDR). By using these technologies together, customers are able to achieve very high levels of data availability. Both replication technologies can be integrated with each other and coexist with other availability solutions such as disk mirroring.

HDR and ER have been features of IDS for many years and have proven to be highly reliable, low-maintenance technologies. IDS now adds support for two new types of secondary servers:

Secondary servers that share the same physical disk. The Shared Disk Secondary (SDS) servers provide increased availability by allowing one or more instances of the IDS server to attach to the same disk subsystem, providing redundancy for the server in addition to data redundancy solutions.

Additional remote secondary servers. The Remote Standalone Secondary (RSS) servers extend HDR to provide multiple local or remote backup servers that also replicate the data.

Both SDS and RSS servers give customers a way to obtain increased capacity by distributing workload across multiple servers. Customers can choose any of these solutions on their own, but they become even more powerful when all three types of topologies are combined. By adding Enterprise Replication, a completely customized availability solution can be delivered to meet each unique availability requirement. These configurations are simple to set up and maintain, and are highly scalable.

The following image depicts a Continuous Availability solution with three-tier protection. For example, if the primary in Building-A in New Orleans went down for some reason, the role of the primary can easily be switched to one of the SDS servers running on the blade server in Building-B. This would cause all other secondary servers to automatically connect to the new primary server. If both servers in New Orleans failed, Memphis becomes your primary, Denver can be made the HDR secondary, and you may even add some SDS servers to the Memphis blade server for load balancing.

To learn more about this feature, please read the following white paperwritten by the architects of this feature:

The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions; they can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run without modification queries that are interoperable on other database servers that support industry-standard SQL syntax.
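As a minimal sketch of the SQL-92 syntax described above (the table and column names here are assumptions, not taken from the demo script):

```sql
-- A derived table (table expression) in the FROM clause, SQL-92 style.
-- The AS clause assigns a correlation name and temporary column names.
SELECT d.cust, d.total
  FROM (SELECT customer_num, SUM(total_price)
          FROM orders_items GROUP BY customer_num) AS d(cust, total)
 WHERE d.total > 1000
 ORDER BY d.total DESC;
```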

You can find examples using derived tables in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.

Optimizer Directives in ANSI-Compliant Joined Queries

Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:

The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords, the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file. This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.

Trigger Enhancements

Several new features expand the syntax and the functionality of triggers on tables and on views:

You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table and multiple INSTEAD OF triggers on a view.

When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.

You can create SPL procedures that refer to applicable OLD and NEW trigger correlation values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, for example by using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: execute procedure foo() with trigger references].

New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a Boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.
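A hedged sketch of how these pieces fit together (the audit table, procedure name, and column list below are assumptions, not from the demo):

```sql
-- Sketch only: cust_audit and log_change are assumed names.
-- The procedure declares its own OLD/NEW correlation names and
-- uses the INSERTING/UPDATING operators to test the DML event type.
CREATE PROCEDURE log_change() REFERENCING OLD AS o NEW AS n FOR customer;
    IF INSERTING THEN
        INSERT INTO cust_audit VALUES (n.customer_num, 'insert', CURRENT);
    ELIF UPDATING THEN
        INSERT INTO cust_audit VALUES (o.customer_num, 'update', CURRENT);
    END IF;
END PROCEDURE;

CREATE TRIGGER cust_ins INSERT ON customer
    FOR EACH ROW (EXECUTE PROCEDURE log_change() WITH TRIGGER REFERENCES);
```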

These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view. You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.

Index Self-Join Query Plans

In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index-bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.

The optimizer also supports two new join-method directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.
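As a sketch of the inline form (the table "orders" and its composite index "idx_ab" are assumed names):

```sql
-- Force an index self-join path over the assumed composite index idx_ab,
-- even though the filter is only on the non-leading column b.
SELECT {+ INDEX_SJ(o idx_ab) } o.a, o.b
  FROM orders o
 WHERE o.b BETWEEN 100 AND 200;
```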

Enhanced Concurrency with Committed Read Isolation

In the Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
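For example, a session can request last-committed reads like this (a sketch; the query itself is illustrative):

```sql
-- Return the last committed version of locked rows
-- instead of waiting on, or failing over, row-level locks.
SET ISOLATION TO COMMITTED READ LAST COMMITTED;
SELECT fname, lname FROM customer;
```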

This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using a Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.

Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invokes explicitly, and to SPL routines that queries and other DML operations invoke implicitly.

This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:

– BOOLEAN
– LVARCHAR
– DISTINCT of built-in types that are not opaque
– DISTINCT of BOOLEAN
– DISTINCT of LVARCHAR
– DISTINCT of the DISTINCT types listed above

These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.

This feature does not relax existing restrictions on other opaque andDISTINCT types or on large-object, serial, and collection data types inlocally or remotely executed SPL routines or external routines.

IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or if you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the necessity to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.

Query statistics in EXPLAIN output.

You can now view statistics about completed queries in the new Query Statistics section of SET EXPLAIN output. This section gives detailed information on each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of the Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command.
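As a sketch of capturing the section for a query (the session id below is hypothetical):

```sql
-- In the session whose plan you want to capture:
SET EXPLAIN ON;
SELECT c.customer_num, o.order_num
  FROM customer c, orders o
 WHERE c.customer_num = o.customer_num;
-- Alternatively, from the command line for session 42 (hypothetical id):
--   onmode -Y 42 1
```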

New SAMPLING SIZE option in UPDATE STATISTICS MEDIUM mode.

In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.

If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:

* The minimum value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80)

If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:

* The minimum value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the specified percent of rows and for the specified confidence level

If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use, if the specified size is smaller.

For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.
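A statement matching that description might look like this (the column names are assumptions):

```sql
-- Sample at least 200 rows; RESOLUTION 2 gives 50 bins of ~2% each,
-- and DISTRIBUTIONS ONLY skips index and table information.
UPDATE STATISTICS MEDIUM FOR TABLE customer (customer_num, lname)
    SAMPLING SIZE 200 RESOLUTION 2 DISTRIBUTIONS ONLY;
```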

Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.

New FILE TO option to redirect SET EXPLAIN output to a file that you choose.

When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

The filename can be any valid combination of optional path and filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" under the "/tmp" directory.

SET EXPLAIN FILE TO '/tmp/explain.vijay';

These features reduce the need for DBAs to perform UPDATE STATISTICS onindexes, enabling better query plans and ease of use.

2> Normal dbspace backup order: The onbar -b -w backup of normal dbspaces does not depend on the order in which the dbspaces were created, but on the amount of space used in each dbspace. In the above example, ddbs2's usage is larger than ddbs1's, so onbar backs up ddbs2 first and then ddbs1, without following the creation order.

er_dynamic_config

In IDS 11 you can dynamically change, add, or remove Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. Following are the three commands to use, which are explained in detail along with examples.

Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, update the values in the ONCONFIG file as well. The list of configuration parameters and environment variables with which this command can be used is shown in Table 1.1.

For example, suppose the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.
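A command of that shape would look like this (a sketch, assuming the quoted "parameter value" form of cdr change config):

```shell
cdr change config "CDR_SUPPRESS_ATSRISWARN 2,3,4,5,7"
```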

Before changing the values you can confirm the existing values by doing:
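One way to check the in-memory settings is the onstat -g cdr config option covered later in this document (a sketch):

```shell
# Show current ER configuration and filter for the parameter of interest
onstat -g cdr config | grep CDR_SUPPRESS_ATSRISWARN
```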

The warning clearly indicates that the changes are in-memory only; that means once replication is stopped and restarted, the original values from the ONCONFIG file take effect again. Now let's confirm the new values:

Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1. Following the "change config" example, if you want to add another error and warning message number for suppression to the existing list of 2, 3, 4, 5, and 7, say number 10, you can do that by using the following command.
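A sketch of that command (same quoted "parameter value" form assumed):

```shell
cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
```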

Use the cdr remove config command to remove the existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.

Again, following the same example from "add config", let's now try to remove suppression for error and warning messages 2, 3, 4, and 5 and keep 7 and 10 by using the "cdr remove config" command as follows.
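A sketch of that command (same quoted "parameter value" form assumed):

```shell
cdr remove config "CDR_SUPPRESS_ATSRISWARN 2,3,4,5"
```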

So now that we've covered the BTS blade, let's look at another new datablade. While first introduced in 10.00.FC6, the Binary Datablade (BDT for short) needs some introduction.

The BDT blade is designed to allow for the storage and manipulation of binary encoded strings. It supports two new datatypes, and allows for logical operations like AND, OR, XOR and NOT to be used against these datatypes.

Like any official datablade, it is registered/installed by use of the blademgr utility.

After registering it to your database you now have two additional UDTs:

binaryvar - a UDT which stores a binary string of up to 255 bytes.

binary18 - a UDT which stores a fixed 18-byte binary string. If the value inserted into this UDT is less than 18 bytes, the field is right-padded with zeroes.

Binary data is entered into the columns by use of ASCII hex values. You can choose to preface the value with "0x" or not. The data must also be entered in pairs (since a byte of information ranges from 00 to FF in hex). That means inserting the following will generate an error: F. But inserting the following will not generate an error: 0F.

Currently the above datatypes can be manipulated by the following functions:

bit_and() - performs a logical AND on two BDT datatypes or one BDT datatype and a string constant.

bit_complement() - performs a logical NOT on a single BDT datatype.

bit_or() - performs a logical OR on two BDT datatypes or one BDT datatype and a string constant.

bit_xor() - performs a logical XOR on two BDT datatypes or one BDT datatype and a string constant.

Also the following aggregates support the BDT datatypes:

COUNT DISTINCT()

DISTINCT()

MAX()

MIN()

Please note the following are not supported:

LIKE

MATCHES

The BDT Blade supports b-tree indices for both single column andcomposite indices.

So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example, and insert some rows.
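A sketch of what such an example might look like (the column names and literal values are assumptions):

```sql
-- A table with both BDT types; values are entered as ASCII hex,
-- with or without the "0x" prefix, always in byte pairs.
CREATE TABLE bdt_example (
    id  INTEGER,
    bv  BINARYVAR,
    b18 BINARY18
);

INSERT INTO bdt_example VALUES (1, '0xDEADBEEF', '0x0F');
INSERT INTO bdt_example VALUES (2, 'FF00', 'A1B2');

-- Combine a stored value with a string constant using a logical AND
SELECT bit_and(bv, '0xFF00') FROM bdt_example WHERE id = 1;
```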

bts_release: Allows you to skip using blademgr to find out what version of theblade you are using.

Performance Tuning:

The biggest performance tuning tool is bts_index_compact. From an administrative standpoint, we treat BTS indices the same way we treat btree indices, with one exception: we have no automated thread which goes out and cleans the BTS indices, whereas for btree indices we have btscanner threads. bts_index_compact is the BTS version of what the btscanner thread does. So from a performance standpoint, you should frequently run bts_index_compact in order to decrease the size of the index, both in disk space and number of elements, and thus increase performance. Please note that you can make this process automated by creating your index with delete='immediate' in your statement. This will eliminate the need for compacting the index, but will make all deletes run longer. A better option would be to set up a function using the admin api to periodically run bts_index_compact on your indices, thus creating a poor man's btscanner for bts, so to speak.

Another performance tuning tip is to make sure that you have created your extspaces on fast disk. If you create an extspace on an NFS mount you are basically asking for horrible performance. A little bit about extspaces may be in order. External spaces, or extspaces, are exactly as they sound: they are spaces external to the database proper. What this means is that you cannot use normal onstats to monitor an extspace.

Troubleshooting BTS Issues:

So now you have run into a problem. Maybe it's something as simple as not remembering where your extspace is actually located, or maybe you are hitting bigger problems; in either case, this is when you need to enable BTS tracing. This process is actually fairly straightforward, as BTS has provided two functions to handle it. First you call bts_tracefile to set your trace file location, then you call bts_tracelevel, which will dump information to the file you specified.

Let's say that you had just forgotten the syntax and location of your BTS index. The quickest way to find them would be to run the following:
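A sketch of that sequence, using the two tracing functions named above (the file path, trace level, and search string are assumptions):

```sql
-- Point tracing at a file, raise the trace level, then run any BTS query
EXECUTE FUNCTION bts_tracefile('/tmp/bts_trace.log');
EXECUTE FUNCTION bts_tracelevel(1);
SELECT company FROM customer WHERE bts_contains(company, 'all');
```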

Below is a snippet from the trace file generated from the above query:

Taking a quick look at this you can see what your extspace info is.

database name = stores_demo
owner name = informix
ext space name = bts1

You also know now that your extspace is located in /work1/informix/11.10.FC1/bts_idx/stores_demo/informix

A quick word on extspaces. Extspaces are spaces that are external to the database; as such there is no onstat that monitors them. If you forget the name of your extspace or forget its location, the above method is essentially your only option for finding them. One other important note about extspaces: if you want to back them up, there is only one supported method, which is as follows:

So you want to get the BTS datablade running on a database? Well hereis a step by step Quick Reference to get your database ready to use theBTS Datablade.

First you need to register the BTS blade in your database. This is done with the following method:
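Registration is typically done interactively through blademgr; a sketch (the server prompt and the module version string are assumptions and will vary by release):

```shell
$ blademgr
my_server> register bts.1.00 stores_demo
my_server> quit
```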

Next you need to create an external space for your BTS index. This is accomplished by the following method:
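A sketch using onspaces (the extspace name and path are assumptions):

```shell
# -c -x creates an external space; -l gives its location on disk
onspaces -c -x bts_extspace -l /work1/informix/bts_idx
```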

Next you need to set up your instance so that the BTS Virtual Processor is created. This is done by adding the following to the $ONCONFIG file:

VPCLASS bts,noyield,num=1

Please note that at this point you have two options to start your bts VP: the first is to cycle the instance (i.e., onmode -ky followed by oninit), the second is via the onmode -p command (i.e., onmode -p +1 bts).

Finally you need to create an index using the bts access method. Below is an example of the code.
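A sketch of such an index (the index name, extspace name, and operator class appropriate to a CHAR column are assumptions):

```sql
-- bts_char_ops is the BTS operator class for a CHAR column;
-- the index is stored in the extspace created earlier.
CREATE INDEX bts_company_idx ON customer (company bts_char_ops)
    USING bts IN bts_extspace;
```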

And now just to validate that the BTS blade is up and running, let's run a quick test. We'll run the following query:

SELECT company FROM customer WHERE bts_contains(company, '%all');

And so now you should have the BTS set up and ready to run.

In our final Blog on the BTS we will go over some performance tuning,and how to troubleshoot some issues.

In IDS 11.10, you can configure session properties using sysdbopen() and sysdbclose() procedures; see Configuring Session Properties. These procedures are stored in the sysprocedures system catalog. A memory cache is maintained for these procedures to facilitate faster searching of procedure identifiers. A new onstat option is available to print statistical information about this memory cache. From this information you can determine how many users are accessing a database and the sysdb procedures defined for public and other users. A sample output is shown:

Procedure ids of public.sysdbopen() and public.sysdbclose(). The value will be (-1,0) if there are no public or user-defined sysdb procedures for that database

Username

User name of any users who have their own sysdbopen() and sysdbclose() procedures for that database

User procids

Procedure ids of sysdb procedures for a particular user in a database

UsrList#

A user hash list to which the user belongs

The number of lists is determined by the onconfig parameter DS_HASHSIZE. The maximum number of database entries for a list is determined by DS_POOLSIZE. This is a soft limit, as the limit is checked only when you close a database. When you close a database, if the number of database entries for that list is greater than DS_POOLSIZE, IDS deletes all excess database entries in that list that have a reference count of 0.

The installation and uninstallation applications allow you to selectively add or remove components. To choose specific features during installation, you must choose a custom setup option.

You can remove specific Dynamic Server features after both typical and custom installations, provided that the features are not required for your system's integrity and that they have not been used in your implementation.

You can also add or reinstall specific features after completing setup of a Dynamic Server instance without reinstalling the base server. The installation and uninstallation applications detect your implementation's setup to prompt you about the features accordingly.

UNIX and Linux-platform installation
=========================
The manifest file manifest.inf ($INFORMIXDIR/etc/manifest.inf) and the installed-files lists IIFfiles.installed ($INFORMIXDIR/etc/IIFfiles.installed) on sites using J/Foundation, and IDS2000files ($INFORMIXDIR/etc/IDS2000files) on sites not using J/Foundation, are dynamic files that log installation activity of an IDS instance. If any of the components is uninstalled, the entry for that component is removed from the manifest.inf file. These "log files" can help you quickly see what features and component files are currently installed, as well as a history of such activity. Do not modify the content of these files.

WINDOWS-platform installation
======================
The manifest file manifest.inf (%INFORMIXDIR%\etc\manifest.inf) is a dynamic file that logs installation activity of an IDS instance. This "log file" can help you quickly see what features and component files are currently installed, as well as a history of such activity.

Do not modify the content of these files.

Below are the snippets from the manifest.inf file
=========================================

In IDS 11.10, a DBA or user informix can create sysdbopen() and sysdbclose() procedures to be executed when a database is opened and closed. These procedures can be used to change the properties of a session without changing the application that the session executes. Any statements that are valid in a UDR can be executed in these procedures to change the session behavior.

A DBA or user informix can create the following procedures in a database:

username.sysdbopen

public.sysdbopen

username.sysdbclose

public.sysdbclose

where

username: Operating System User

Each time a user user1 opens a database using either a DATABASE or CONNECT TO statement, the database server executes user1.sysdbopen() if such a procedure is defined; if not, it executes public.sysdbopen(). Each time a user user1 closes a database using either a CLOSE DATABASE or DISCONNECT statement, the database server executes user1.sysdbclose() if such a procedure is defined; if not, it executes public.sysdbclose(). sysdbclose() will be executed even if the application exits without an explicit CLOSE DATABASE or DISCONNECT statement, because the server does an implicit close of the current database in such cases.

The owner name is not ignored when you create sysdbopen() and sysdbclose() procedures in non-ANSI databases, so you can create these procedures for specific users in non-ANSI databases.

The following procedure creates a table oltp_stat, sets the role to oltp and the PDQ priority to 10 for user oltp_user in a database:
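A sketch matching that description (the oltp_stat column list is an assumption; IF NOT EXISTS, new in 11.10, keeps the CREATE from failing on subsequent opens):

```sql
CREATE PROCEDURE oltp_user.sysdbopen()
    -- assumed column list for the statistics table
    CREATE TABLE IF NOT EXISTS oltp_stat
        (uname CHAR(32), opened DATETIME YEAR TO SECOND);
    SET ROLE oltp;
    SET PDQPRIORITY 10;
END PROCEDURE;
```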

A DBA or user informix can set the environment variable IFX_NODBPROC to any value, including 0, to prevent the execution of sysdbopen() and sysdbclose() procedures. When you set up sysdbopen() and sysdbclose() procedures, you can set the environment variable IFX_NODBPROC and execute the procedures manually to test whether they work as expected. You need to unset the environment variable IFX_NODBPROC after testing.

For more information, see information on sysdbopen() and sysdbclose() in the IBM Informix Guide to SQL: Syntax

Previously there was no easy way to monitor onbar archiving progress. It was always a question how long the onbar process would take to complete an archive, or how much time onbar would spend transferring data between the server and the storage manager, and vice versa.

Informix introduces two new configuration parameters to help with onbar monitoring.

BAR_PROGRESS_FREQ

BAR_PERFORMANCE

BAR_PROGRESS_FREQ

The BAR_PROGRESS_FREQ configuration parameter specifies, in minutes, the frequency of the progress messages in the bar activity log for backup and restore operations.

For example, if BAR_PROGRESS_FREQ is set to 5, onbar reports the percentage of the object backed up or restored every five minutes. Following is an excerpt of the bar activity log showing the progress of a rootdbs dbspace backup:

The default value of BAR_PROGRESS_FREQ is 0. If the value is set to 0, onbar does not write any progress messages to the bar activity log.

The BAR_PROGRESS_FREQ value cannot be less than five minutes when monitoring onbar progress.

If ON-Bar cannot determine the size of the backup or restore object, it reports the number of transfer buffers sent to the database server instead of the percentage of the object backed up or restored.

BAR_PERFORMANCE

The BAR_PERFORMANCE configuration parameter specifies the type of performance statistics to report and write to the bar activity log for backup and restore operations.

For example, if BAR_PERFORMANCE is set to 3, onbar reports in the bar activity log the time spent transferring data between the Informix server and the storage manager.

The default value of BAR_PERFORMANCE is 0. If the value is set to 0, onbar does not report any performance statistics to the bar activity log.

Valid values of BAR_PERFORMANCE are 0, 1, 2, or 3.

0 - turn performance monitoring off

1 - display the time spent transferring data between the server and storage manager

2 - display sub-second accuracy in the timestamps

3 - display both timestamps and transfer statistics

Both the BAR_PROGRESS_FREQ and BAR_PERFORMANCE configuration parameters take effect when the onbar process starts.

Beginning with IDS version 11.10, Informix introduced several new onstat commands to enhance server maintenance and monitoring. Database administrators can take advantage of these new onstat commands to perform their regular activities.

onstat -g rss Prints Remote Standalone Secondary (RSS) server information. The output of this command differs slightly depending on whether the command is run on the primary server or on the RS secondary server.

onstat -g cat Prints information from the Enterprise Replication global catalog. The global catalog contains a summary of information about the defined servers, replicates, and replicate sets on each of the servers within the enterprise.

onstat -g cdr config Prints the settings of Enterprise Replication configuration parameters and environment variables that can be set with the CDR_ENV configuration parameter.

For a complete list of onstat commands, see the IBM Informix Administrator's Reference Guide.

The "sysadmin" database is created in the root dbspace at server initialization. This database is required for the Scheduler API and the Remote Administration feature. Until the B5 drop, there wasn't any way of moving the sysadmin database safely to another dbspace. A new SQL Admin API command in the B6 drop now simplifies this task by allowing the "informix" user to drop and recreate this database in any other dbspace.

If it is determined that the root dbspace does not have enough space for storing task properties and command-history information, you can move the sysadmin database to a different dbspace by using the "reset sysadmin" SQL Administration API command. This command drops the sysadmin database from the root dbspace and recreates it in the specified dbspace.

Here's an example to move the sysadmin database...

1. Make sure the following message has appeared in the online message log after server startup:

SCHAPI: 'sysadmin' database will be moved to 'admindbs'. See online message log.

The internal thread, bld_sysadmin (seen via onstat -g ath), waits up to five minutes to obtain exclusive access to the sysadmin database. The progress of the bld_sysadmin thread is logged in the online message log.
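The move itself is driven through the SQL Admin API; as a sketch (assuming the target dbspace "admindbs" from the message above already exists):

```sql
-- Run as user informix from the sysadmin database; drops sysadmin
-- from the root dbspace and recreates it in 'admindbs'
EXECUTE FUNCTION task("reset sysadmin", "admindbs");
```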

Informix Dynamic Server version 11.10 introduced some new SQL keywords that are reserved for internal use. New reserved words might affect migration of an application. Existing applications might encounter restricted functionality or unexpected results if an SPL routine is defined with the same name as a new reserved word.

Although you can use almost any word as an SQL identifier, syntactic ambiguities might occur if you use an SQL reserved word. An ambiguous statement might not produce the results you want.

Following is the list of new SQL reserved words:

ADMIN: A built-in administrative function available with the sysadmin database. Only the DBSA can run this function. The admin() function returns an integer that describes the result of the command.

AVOID_INDEX_SJ: An optimizer directive that prevents the optimizer from using an index self-join path for the specified index.

INSERTING: A Boolean operator that can be used with trigger routines (and only in trigger routines and in other UDRs that are invoked in triggered-action statements) in contexts where Boolean expressions are valid.

IDSSECURITYLABEL: The IDSSECURITYLABEL type of Dynamic Server stores a security label in a table that is protected by a security policy. Only a user who holds the DBSECADM role can create, alter, or drop a column of this data type. This is a built-in DISTINCT OF VARCHAR(128) data type, but it is not classified as a character data type because its use is restricted to label-based access control. A table that has a security policy can have no more than one IDSSECURITYLABEL column, and a table associated with no security policy can have none.

INDEX_SJ: An optimizer directive that forces the optimizer to use the specified index to scan the table in an index self-join path.

REFERENCES: An SQL clause that places a foreign-key constraint on one or more columns. The referenced column can be in the same table as the referencing column, or in a different table in the same database.

SAMPLING: In UPDATE STATISTICS MEDIUM mode, the SAMPLING SIZE keywords can be used to specify the minimum number of rows to sample when calculating column-distribution statistics.

SELECTING: A Boolean operator that can be used with trigger routines (and only in trigger routines and in other UDRs that are invoked in triggered-action statements) in contexts where Boolean expressions are valid.

SYSDBCLOSE: An SPL procedure that is executed when a CLOSE DATABASE or DISCONNECT statement disconnects from the database.

SYSDBOPEN: An SPL procedure that is executed when a DATABASE or CONNECT statement explicitly connects to a database.

TASK: A built-in administrative function available with the sysadmin database. Only the DBSA can run this function. The task() function returns a string that describes the result of the command.

UPDATING: A Boolean operator that can be used with trigger routines (and only in trigger routines and in other UDRs that are invoked in triggered-action statements) in contexts where Boolean expressions are valid.

USELASTCOMMITTED: An ONCONFIG parameter, environment variable, or SET statement option that specifies use of the last committed version of a row as the concurrency level.

WITH: A keyword used in various SQL statements to increase readability.

DBSECADM: A role reserved for LBAC (label-based access control) administrative work.
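As one illustration, the SAMPLING keywords appear in an UPDATE STATISTICS statement; the table name and size below are hypothetical:

```sql
-- Sample at least 2000 rows when computing column distributions
UPDATE STATISTICS MEDIUM FOR TABLE customer SAMPLING SIZE 2000;
```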

For a complete list of current SQL reserved words, see the IBM Informix Guide to SQL: Syntax.

Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That checkpoint time is stored with the backup information. The advantage of a whole-system backup is that you can restore the storage spaces with or without the logical logs. Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.

One of the new features of Cheetah is the ability to back up and restore dbspaces in parallel with whole-system functionality. With a parallel backup, multiple processes run simultaneously, each process backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.

The ONCONFIG parameter BAR_MAX_BACKUP is used for whole-system backups and restores as well. The BAR_MAX_BACKUP parameter specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.

To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar spawns at most 3 processes concurrently.

If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed. The number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less.

To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.
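For example, a three-way parallel whole-system backup could be run with an onconfig setting like this (the value is illustrative):

```
# Allow up to 3 parallel onbar processes per backup/restore command
BAR_MAX_BACKUP  3
```

and then executing the whole-system backup as usual with onbar -b -w.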

Also in Cheetah, storage-manager efficiency is improved through better dbspace ordering: the "biggest" dbspaces are backed up first and the "smallest" last. The dbspace with the most pages used is the "biggest" and is backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in different dbspaces. The new order of backup and restore is also effective for non-whole-system backups and restores. A restore is done in the same order as the backup, to ease the storage manager's access to stored objects.

To implement the new ordering of dbspaces for backup and restore, changes were made to the bar_instance table in the sysutils database: a new column, "ins_backup_order" (integer, default 0, not null), is added to the bar_instance table.

Changes were also made to the ixbar file: a new field containing the backup sequence integer is added as the last field in each line. The value of this new field is always 0 (zero) for log-backup objects.

Even if BAR_MAX_BACKUP is set to 1 for a serial backup, the dbspaces are backed up using the new dbspace order.
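To inspect the recorded order after a backup, you could query the new column directly; this is a sketch against the sysutils database:

```sql
-- Objects with the lowest ins_backup_order values were backed up first
SELECT * FROM bar_instance ORDER BY ins_backup_order;
```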


Backup operation:

Only one checkpoint is taken for all dbspaces, just before the backup of the rootdbs.

The rootdbs is backed up first, without parallelism (no change here).

Before-image processor threads ("arcbackup2") are started at this time, one for each dbspace (more threads running in parallel). As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as the backup progresses).

New order of backup: the order is based on the used-page count at the start of the backup. The dbspace with the most pages used is the "biggest" and is backed up first.

Restore operation:

Not much has changed, except that the restore is now parallel.

The rootdbs is restored first, without parallelism (no change here).

Dbspaces are restored in the same order as they were backed up, using the value of "ins_backup_order" to determine the correct order.

If BAR_MAX_BACKUP is changed between backup and restore, the objects will still be restored in the same order, but their timing relative to each other may differ significantly. This could have negative effects on storage-manager performance.

Beginning with IDS version 11.10, Informix introduced a new index-scan feature called "Index Self Join". Prior to IDS version 11.10, a composite index scan allowed scanning only a single range of an index, based on the start and end index-key positions. The Index Self Join access method permits scanning many small subsequent ranges inside the composite index, instead of one large range, based on filters on the non-leading keys of the composite index.

On previous IDS versions, queries against a table with a composite index performed inefficiently when the leading column had a higher ratio of duplicate values than the subsequent column(s) in the composite index. The Index Self Join is a new type of index scan that can use subsets of the full range of a composite index. With this new index scan, a table is logically joined to itself, and the more selective non-leading index keys are applied as index-bound filters to each unique combination of leading-key values.

Previously, the optimizer scanned all index keys that satisfied the complete index-key condition, or performed a sequential scan if the leading key had no WHERE-clause conditions associated with it in the query. With this feature, the optimizer finds the unique leading keys (low selectivity) and performs small scans using these unique keys together with the remaining index keys provided in the WHERE-clause condition of the query.

Advantage

An Index Self Join is beneficial for situations in which:

The lead key of an index has many duplicates, and

Predicates on the lead key are not selective, but predicates on the non-leading index keys are selective.

How "Index Self Join" works

Let's create a table "tab1" with the following schema and load some data:
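The original listing is not reproduced here; a hypothetical schema consistent with the discussion that follows would be:

```sql
-- Hypothetical example schema: three integer columns with a
-- composite index led by col1
CREATE TABLE tab1 (col1 INT, col2 INT, col3 INT);
CREATE INDEX idx1 ON tab1 (col1, col2, col3);
-- ...load rows with many duplicate values in col1...
```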

The simple view of the composite index ‘idx1’ on tab1(col1,col2,col3):

Prior to IDS v11.10, the only possible filters that could be used for positioning the index scan were on col1 (col1 >= 1 AND col1 <= 3):

Prior version: Lower Filter col1 >= 1, Upper Filter col1 <= 3

Beginning with IDS version 11.10, we can use filters on col2 and col3 for positioning the index scan, which allows skipping unnecessary index keys at both ends of the index. IDS scans only the selective index keys that are relevant, avoiding a scan of a large portion of the index. This strategy improves query performance by reducing the portion of the index that needs to be scanned.

INDEX_SJ: Use the specified index to scan the table in an index self-join path. The optimizer is forced to scan the table using an index self-join path with the specified index (or to choose the least costly index in a comma-separated list of indexes for an index self-join path).

AVOID_INDEX_SJ: Do not use an index self-join path for the specified indexes. The optimizer does not consider the specified index for scanning the table in an index self-join path.
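As a sketch, both directives are written as optimizer-directive comments immediately after the SELECT keyword (reusing the tab1/idx1 example above; the filter values are illustrative):

```sql
-- Force an index self-join path on idx1
SELECT {+ INDEX_SJ(tab1 idx1) } *
  FROM tab1
 WHERE col1 >= 1 AND col1 <= 3 AND col2 = 10 AND col3 = 20;

-- Keep the optimizer from considering idx1 for an index self-join path
SELECT {+ AVOID_INDEX_SJ(tab1 idx1) } *
  FROM tab1
 WHERE col1 >= 1 AND col1 <= 3 AND col2 = 10 AND col3 = 20;
```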

One of the new features of Cheetah is the inclusion of several additional DataBlades. One of the more exciting of these blades is the Basic Text Search blade, BTS for short. This DataBlade allows you to perform rich searches on text data found in char, varchar, lvarchar, blob, and clob datatypes. The intent is to provide a more robust method for searching text than current forms like the MATCHES or LIKE functions. The documentation for the BTS blade can be found in the Built-In DataBlade Modules User's Guide. Today we will go over a little of the syntax that allows you to perform some neat queries.

At the heart of the BTS datablade is a single function. The name isbts_contains, and it has the following syntax:

bts_contains (<column to search>, <searchcriteria>, {score # REAL})

The score section at the end is optional; it allows you to give a weighted value to each member of your result set. So let's presume that you had a varchar(255) column called remarks in a table called photos. If you wanted to search for the word "Apple" or the phrase "Macbook Pro", and you only wanted results with a score of 70.0 or greater, you would do the following:
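A sketch of such a query (assuming a bts index has already been created on photos.remarks; the column list is illustrative):

```sql
SELECT remarks, score
  FROM photos
 WHERE bts_contains(remarks, 'apple OR "macbook pro"', score # REAL)
   AND score >= 70.0;
```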

In version 10.00, IDS made available a new onstat command: onstat -g pqs. For those familiar with the old X-based tool called xtree, onstat -g pqs allows you to dynamically track a session's query plan/tree. The syntax is as follows:

onstat -g pqs {0 | <session id>}

The 0 will actually attempt to grab the current step of each query planfor all sessions.

Two considerations:

onstat -g pqs only shows the phases that have already been processed, plus the current phase.

onstat -g pqs only shows active queries; once a query is complete, the plan is gone.