If you have been following Informix events for the last three years, you have heard us talking about the Wire Listener. This is a Java daemon process that was originally implemented in 12.10.xC2 to allow seamless compatibility with MongoDB. A later variation of the Listener provided the REST interface to Informix. The recent (June 2016) 12.10.xC7 release now brings MQTT support to the Informix party.

MQTT is a simple and lightweight PUBlish-SUBscribe messaging protocol that is widely used in Internet of Things (IoT) applications. Since it is asynchronous, one common IoT use case is for sensors to publish readings at configured intervals to a predefined topic hosted by an MQTT message broker. Applications register interest in various sensor readings by subscribing to the particular topic for that sensor.

There are many implementations of MQTT, ranging from open source (Eclipse Mosquitto and Paho) to commercial implementations (IBM MessageSight). We've integrated an MQTT message broker into the Informix Wire Listener component, so now with 12.10.xC7, applications can use one of the many client API libraries to publish messages directly to the Informix broker. There are client libraries for most programming languages and environments; this page (https://github.com/mqtt/mqtt.github.io/wiki/libraries) gives a fairly complete list of MQTT client implementations.

So, why is this of interest and how might it be used? Let's take the use case of a sensor generating a reading each second. For this simple example, we will read the amount of free memory available to the Java VM. (OK, this is not really a sensor in the strict sense, but it keeps the application source code easy to show. You can swap in some other "real" sensor and the MQTT publish part of the application will work the same way.)

Before we look at the implementation, we will set up the database and start the listener.

1) We need a database table to store the data. MQTT readings can be stored in relational tables, in JSON collections, and in TimeSeries storage. For this example, we will store the readings in a relational table in our "test" database. Our table schema is simple – it contains just the sensor id, the timestamp and the sensor reading value. A sketch of such a table, "sensor_tab", is shown below.
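The column names and types here are illustrative choices rather than a definitive schema:

create table sensor_tab
  (
    sensor_id varchar(32),
    ts bigint,
    reading bigint
  );

2) Create a properties file for the MQTT listener. Like the NoSQL and REST listeners, the MQTT listener is configured through a properties file. A minimal sketch (the file name mqtt.properties, the port number, and the connection values are assumptions to adapt for your environment) might look like this:

listener.type=mqtt
listener.port=8087
url=jdbc:informix-sqli://localhost:9088/sysmaster:INFORMIXSERVER=ids_server;USER=mqttuser;PASSWORD=mypassword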

3) Start the MQTT listener. The MQTT listener is the same JAR file as the NoSQL and REST variants, and is started in the same manner. Just remember to start it with your MQTT properties file. You’ll want to listen on a different port than you are using for the NoSQL or REST listeners to avoid conflicts. Start the listener using a command similar to this:
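For example, assuming the properties file above is saved as etc/mqtt.properties:

java -jar bin/jsonListener.jar -config etc/mqtt.properties -start &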

4) Now that we have the database created and the MQTT listener started, we can look at the application that publishes data. In this example we will use the Java Eclipse Paho MqttClient synchronous API.

The program connects to the message broker (the Informix MQTT Listener) and then loops to get 100 readings (obviously a trivial number for this demo). Each time through, it grabs the system time, reads the available memory for the JVM “sensor”, generates a JSON document with the appropriate values, and publishes the JSON to the specified topic (the topic is the database.table_name). The program sleeps for a second and then repeats. Here is the sample Java code for this:

// Very rudimentary Java example demonstrating publishing of sensor data to the Informix MQTT listener
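// NOTE: the listing below is a reconstructed sketch, not the original program.
// The broker host/port, client id, topic name ("test.sensor_tab") and the JSON
// field names (sensor_id, ts, reading) are assumptions based on the description above.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MemoryPublisher {
    public static void main(String[] args) throws MqttException, InterruptedException {
        // Connect to the Informix MQTT listener (host and port are assumptions)
        MqttClient client = new MqttClient("tcp://localhost:8087", "jvm-memory-sensor");
        client.connect();

        // The topic is database.table_name -- here the "test" database and the sensor_tab table
        String topic = "test.sensor_tab";

        for (int i = 0; i < 100; i++) {
            long ts = System.currentTimeMillis();
            long freeMem = Runtime.getRuntime().freeMemory();

            // Build a JSON document with the sensor id, timestamp and reading
            String payload = "{\"sensor_id\":\"jvm_free_mem\",\"ts\":" + ts
                    + ",\"reading\":" + freeMem + "}";

            client.publish(topic, new MqttMessage(payload.getBytes()));
            System.out.println("Published: " + payload);

            Thread.sleep(1000);   // one reading per second
        }
        client.disconnect();
    }
}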

5) Compile and run the program. Here is some debug output as the messages are being published:

6) Examine the data in the sensor_tab table. See that the sensor data has been “published” to this table:

This is a basic example, but you can see that the process to publish data using MQTT is very simple. Some benefits of this approach? You can push data to your Informix database from a wide variety of languages: not just the usual suspects of C, C++ and Java, but also Go, Haskell, Erlang, Lua, Python, Ruby, and Swift, as there are MQTT client libraries for all of these languages and more. You don't need to use a SQLI or DRDA driver; in fact, you don't need to use SQL at all. And the process is asynchronous, so your application can publish and move on. This is clearly not the right paradigm for many types of OLTP applications, but depending on your application requirements, this may be a good mechanism for getting data inserted into your Informix database. Give it a try and let us know what you think.

If you have joined a recent event with the Informix development team (IIUG conference, Insight, Informix local events), you will have noticed that we have been creating many demos using Raspberry Pi devices. Last year, we created the Informix IoT Gateway Kit (https://github.com/IBM-IoT/iot-gateway-kit/wiki) (GWK), which is intended to help turn your Pi into an IoT edge device, working with Texas Instruments Sensor Tag devices communicating to the Pi over Bluetooth (BLE 4.0). We started playing with a Pi B+ device and then moved to a Pi 2. Just recently, we acquired one of the new Pi 3 devices.

The biggest difference between Pi2 and Pi3 is Pi3’s onboard support for wireless and Bluetooth networking. No more USB dongles! The clock speed is a bit faster than Pi2 and it is now a 64 bit processor using the next generation ARM (v8) chip. Unfortunately there is no 64 bit version of the Raspbian OS yet.

We retired our Pi B+ rather quickly after getting the Pi2 devices, as the differences between these two computers are significant. I did install the GWK and started a sensor data collection process on one of our retired Pi B+ devices. This week, I shut it down after 236 days of collecting sensor data. This is certainly not a long time for Informix users, but not knowing anything about the stability of the Pi as a computing platform, I am happy to report that Informix on the Pi chugs along just like it does on your favorite server platform.

Getting the GWK operational on the Pi3 is not as simple as loading a MicroSD card from an earlier version of the Pi. I tried doing this with both the Pi B+ and Pi2 cards. No go. We tried copying a fresh version of Raspbian to a newly formatted MicroSD card, but that did not work either; there was some kind of error with the network driver. We ended up going back to a NOOBS (New Out of the Box Software) install by copying that image to the MicroSD card. With NOOBS, it was then possible to select Raspbian, which was then installed and configured. This process presumably installs the suitable drivers for the particular Pi architecture. After going through the reboot process (twice, just to make sure :) my Pi3 was up and running.

Now, what about Informix on that Pi3? While we do have a version of Informix for 64-bit ARM devices (12.10.FC6 DE), it won't work on the current Pi3 running Raspbian Jessie, as that OS is still 32-bit. Keep using the 32-bit ARM v6 version (12.10.UC6DE ARM v6). You can get this from our Trials and Downloads page https://www-01.ibm.com/marketing/iwm/iwm/web/preLogin.do?source=ifxids. Look for the Informix Developer Edition for Linux ARM v6 32 (Raspberry PI) version.

We use our GWK to install and configure Informix. Following the GWK instructions (https://www.ibm.com/developerworks/community/blogs/cee6c09c-a315-4b04-ad14-57d6a60fa8bb/entry/setting_up_the_iot_kit?lang=en), copy the Informix tar file into a new iot-gateway-kit-depend directory and then run the git clone command to copy the GWK source files into your local directory:
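For example (the Informix tar file name will vary with the version you downloaded):

mkdir iot-gateway-kit-depend
cp <informix-armv6-developer-edition>.tar iot-gateway-kit-depend/
git clone https://github.com/IBM-IoT/iot-gateway-kit.git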

Run the iot_install script from your new iot-gateway-kit directory and let the install do its thing. It will be installing node.js, Node-RED, and some other components, so it will take a while. After this completes, your Informix server will be configured and running. Just turn on your SensorTag and point your browser to http://localhost:1880 to see your Node-RED flow. When you activate one or more of the debug nodes, you will see your sensor data streaming in on the debug tab. There are also some analytics generated from your on-board timeseries data that can be viewed. You can point your browser to http://localhost:1880/dash to start a new dashboard. Some instructions for creating a dashboard can be found at https://github.com/IBM-iot/node-red-contrib-graphs. I don't find this part really well documented, so we will be looking to improve it. Drop me an email if you are having trouble here.

Lastly, here are some links around Informix and GWK that are not mentioned above:

POODLE, Logjam, Heartbleed, Bar Mitzvah, Freak – what are these? New boy bands? Nope, these are all high-profile security vulnerabilities in SSL/TLS and OpenSSL. We are seeing a huge increase in reported security vulnerabilities, particularly in OpenSSL and Java. Many of these have been widely covered in the media, and not just the technical press.

As you can guess, IBM has an extensive process to report and track these issues in all of our products. My colleague, Dave Desautels, spends a large percentage of his work week reviewing these issues and managing the IBM process. For the issues that potentially impact Informix users, we issue security bulletins and create patches or fixpack releases containing updates to OpenSSL, GSKit, Java and other impacted open source libraries in our products.

This is not enough however. It takes your help to make our systems secure. It is important to actively deploy these “fixed” Informix product versions. There are many other instances of Java and OpenSSL in your environments. It is also important for you to update your operating systems and other software that also use these affected products. Having the IBM Java version that patches the POODLE vulnerability is great. Having other versions of Java on your system that are vulnerable to POODLE is not.

OpenSSL and Java generally release new versions each quarter. This is typically around the middle of the last month of each quarter. At the time of each release, they will publicly announce the vulnerabilities that are fixed in that particular release. There is a preference to hold the announcement of vulnerabilities until there are fixes available in order to reduce exploitation.

I encourage all of you to keep your Informix products up to date. In addition, please consider the other copies of OpenSSL and Java that reside on your systems as well. We all want to keep our companies out of the news when it comes to security vulnerabilities.

I know that many in the Informix community have asked about these ports, and so I am very happy to pass on the news…

Just released this week (29 June 2015) are the LONG awaited versions of Informix 12.10 and CSDK 4.10 for Mac OS X. The Informix server is version 12.10.FC5, and all the normal editions are available. The part number is CN4EVEN. CSDK 4.10.FC5 and JDBC 4.10.JC4W1 are bundled with the server. You can download the new server from Passport Advantage:

If you are looking for the standalone CSDK product, 4.10.FC5 is also available. It is part number CSML3ML.

All products were built on OS X 10.9.2 (Mavericks), which should make it easier to certify with OS X 10.10 (Yosemite) and the upcoming 10.11 (El Capitan) release. Certification for Yosemite is in progress. Stay tuned if you need official IBM certification.

The server and CSDK Developer Editions (DE) are being uploaded to the Trials and Downloads page ==>

We've been asked by several users about the MongoDB style authentication model and how it is supported using the new Informix REST interface. I recently went through the process and figured that this information may be useful to others trying to do the same thing.

In case you were not aware, we added a REST interface in the Informix 12.10.xC4 release. The new-in-xC2 jsonListener component was extended to also service REST requests. In xC4, you had to jump through some hoops to install and configure the REST listener, as it was located in the nosql_sdk.zip file in $INFORMIXDIR/bin. In xC5, there was a slight problem in the packaging of the REST listener, as documented in this blog on April 1. With the 12.10.xC5W1 release, everything is working as expected! (Or should I say that the third time is the charm.) You can read more about the REST listener in the Informix Knowledge Center ==> http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.json.doc/ids_json_043.htm?lang=en

Starting the REST listener now is as simple as starting the jsonListener:

java -jar bin/jsonListener.jar -config etc/rest.properties -start &

What? This is the jsonListener! Yes that is correct. The jsonListener and the REST listener are now the same program. They are just started with different configuration files. Note the rest.properties file in the above example.

Now we should discuss some conceptual things about authentication and how this works. The REST listener enforces the MongoDB style authentication model. Both listeners leverage the MongoDB style APIs and so the MongoDB style authentication model was the natural choice here. There are some nuances here, however, as there are several authentication schemes used by our friends at MongoDB. In Informix 12.10.xC5, we support the MongoDB 2.4 authentication model. This is different than the MongoDB 2.2 model and is also different from the one that is implemented in MongoDB 2.6/3.0. Suffice it to say that the engineers at MongoDB are actively making changes in the area of authentication and roles in each release. We will be supporting the MongoDB 2.6/3.0 model in an upcoming drop.

For the present, we will focus on the MongoDB 2.4 model. In this model, there is a system.users collection that contains authentication information for each database. This is an important distinction – with Mongo 2.4 style authentication enabled, you need to specify each user's role in each database. (Because we already know how this story plays out, we do know that this changes in Mongo 2.6.) REST authentication uses the HTTP Basic Authentication model. Authentication is performed by the Informix REST listener daemon, not the Informix database server. The user's password is passed from the client application to the REST listener as a Base64 encoded string. The credentials and user role are checked, and if both are OK, access is allowed.

We will start the example using a fresh stores_demo database. In this database we will define three users using the MongoDB addUser() command. We will make the ifxjson user an administrative user, and define users fred and barney to be normal users with readWrite permissions. We will perform this action using the MongoDB shell – mongo:
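A sketch of that mongo session (the passwords and the exact role list for the administrative user are placeholders):

> use stores_demo
> db.addUser( { user: "ifxjson", pwd: "ifxjson_pw",
...             roles: [ "readWrite", "dbAdmin", "userAdmin" ] } )
> db.addUser( { user: "fred", pwd: "fred_pw", roles: [ "readWrite" ] } )
> db.addUser( { user: "barney", pwd: "barney_pw", roles: [ "readWrite" ] } )
> db.system.users.find()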

In this example, each of the three users (ifxjson, fred and barney) is added using the addUser() command. The parameter to this command is a document containing the user name, password, and role. Role is an array, and so several roles can be defined for each user. The last step is to query the system.users collection for the users that have been defined. Notice that the passwords are Base64 encoded.

Besides specifying the users, passwords, and roles for each database, the second step is to enable authentication in the REST listener. There is just a single entry to enable authentication – this will enable authentication for all databases that the REST listener services. The entry looks like this:

authentication.enable=true

On the local machine, authentication bypass is in place by default. This means that even if you set the authentication.enable property to true, it is ignored if you are connecting to the listener from the local machine. If you really want this enabled, even on the local machine, enable it like this:

authentication.localhost.bypass.enable=false

For completeness, the rest of my rest.properties file looks like this:
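It is roughly along these lines; apart from the two authentication properties discussed above, the values are placeholders to adapt for your environment:

listener.type=rest
listener.port=8201
url=jdbc:informix-sqli://localhost:9088/sysmaster:INFORMIXSERVER=ids_server;USER=ifxjson;PASSWORD=mypassword
authentication.enable=true
authentication.localhost.bypass.enable=false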

After changing the REST listener properties, you will need to restart the REST listener. Depending on how you started the listener, this can be as simple as killing the REST listener process and then restarting it:

java -jar bin/jsonListener.jar -config etc/rest.properties -start &

Now let us test the authentication using curl. The examples use the -u option to specify the user, and curl then prompts for the password. We ask for the HTTP headers to be returned (-i option), as they provide useful information for the example.

1) User fred specified with correct password, but not against the stores_demo database. This results in an error condition since fred is specified as a user only in the stores_demo database:
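The call looks something like this (the port, database, and collection names are placeholders; fred authenticates correctly but targets a database other than stores_demo):

curl -i -u fred http://localhost:8201/mydb/mycollection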

Note that with these curl examples, there is no cookie that saves the authentication information, so it needs to be provided in each REST call. Other, more sophisticated, web programs may cache the user credentials so they do not have to be provided after the initial connection. Of course, this also means that your program may need to clear the cookie cache upon termination, especially on a general-use machine.

Hopefully this example gives you a good idea of how to use authentication with the Informix REST interface. It really is very easy to do and should become a standard part of your database access checklist if you are considering using REST.

Why would you want to do this? By nature, JSON is supposed to be unstructured data. Consider the case where you want to use some analytics or reporting tools available only in the SQL world on your JSON data. IBM products like Informix Warehouse Accelerator (IWA) and Cognos come to mind. There are certainly many other tools that expect data in relational format. So, how to do this?

I recently worked with two users that needed to do exactly this. One wanted to use an OR-mapper package and the other wanted to use a reporting package they already use in their enterprise. My colleague, Erika Von Bargen, developed some transfer-of-information material on this topic. Her example gives an excellent use case of what can be done using SQL functionality to create a view that standardizes a JSON collection.
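Her original sample data is not reproduced here, but a "users" collection with documents along these lines (an illustrative reconstruction, consistent with the result set shown further down) exhibits the kind of inconsistency involved:

{ "fname": "Charles", "lname": "Brooks", "status": "a", "points": 250000 }
{ "firstname": "Robert", "lastname": "Spencer", "status": "d" }
{ "first": "Silvia", "last": "Washington", "points": 5000 }
{ "fname": "Annette", "last": "Edwards", "status": "a" }
{ "first": "Samantha", "lname": "Jones", "points": 120000 }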

Note the inconsistent field names (“fname” vs. “firstname” vs. “first”) and missing fields (field “points” missing from documents 2 & 4, field “status” missing in documents 3 & 5). You can use SQL functions such as NVL, DECODE, CASE, etc. to help clean or standardize your unstructured JSON data. Here is an example of a CREATE VIEW operation using these operators:

CREATE VIEW users_cl(firstname, lastname, status, tier) AS
  SELECT NVL(bson_value_lvarchar(data,"first"),
             NVL(bson_value_lvarchar(data,"fname"),
                 bson_value_lvarchar(data,"firstname"))) AS firstname,
         NVL(bson_value_lvarchar(data,"last"),
             NVL(bson_value_lvarchar(data,"lname"),
                 bson_value_lvarchar(data,"lastname"))) AS lastname,
         DECODE(UPPER(bson_value_lvarchar(data,"status")),
                "A", "ACTIVE",
                "D", "DISABLED",
                "S", "SUSPENDED",
                "UNKNOWN") AS status,
         CASE WHEN bson_value_int(data,"points") IS NULL THEN 0
              WHEN bson_value_int(data,"points") < 10000 THEN 1
              WHEN bson_value_int(data,"points") >= 10000
                   AND bson_value_int(data,"points") < 100000 THEN 2
              ELSE 3
         END AS tier
    FROM users

Note that the NVL function is used to coalesce all the variations of the first-name fields into the consistent column "firstname"; the same operation on the last-name fields produces a consistent "lastname" column. The DECODE operation is used to generate a consistent "status". The CASE statement maps the "points" values into an appropriate loyalty tier, returned as the "tier" column.

With the view created and a SQL query performed on it, the result looks like this:

firstname   lastname     status     tier
Charles     Brooks       ACTIVE     3
Robert      Spencer      DISABLED   0
Silvia      Washington   UNKNOWN    1
Annette     Edwards      ACTIVE     0
Samantha    Jones        UNKNOWN    3

Careful observers may have noticed that the above example uses the bson_value_<data type>() functions to extract data from the JSON collection. This mechanism was the first implementation used to perform this activity. This approach is certainly still supported, but in 12.10.xC4 another mechanism was added. This approach uses the cast operator "::", which reads more naturally for SQL programmers. The dot notation representing sub-documents is also fully supported. Here is the same view definition using the (more elegant) cast operators:

If your JSON data is fairly consistent but has some minor inconsistencies, this approach may work well if you want to make this data available for acceleration via IWA, for example. You can get even more creative with your CREATE VIEW statements, but this gives a good idea as to what kinds of things that you can do. Thanks Erika!

Now that we have all returned from the IIUG conference in San Diego (April 26-30), I wanted to offer thanks to the IIUG planning committee for planning another great event for the Informix community. The number of attendees may have dropped a bit from last year, but the enthusiasm and interest was more than ever. I personally was on the go, meeting with users and prospective users, pretty much non-stop from the Saturday evening customer and partner dinner right through my last tutorial on Thursday afternoon. Thanks to all that attended and made the event productive.

If you were in attendance, it was very easy to see Intel’s presence. I’m sure you can imagine that getting anything done with two large tech companies is an adventure (IBM is bad enough). Intel is truly interested in Informix on the IoT edge and made that message clear in their sessions and in their sponsorship of the conference. I am hoping this is the start of an exciting new partnership.

The demo in the Intel booth was a model of a smart building, with sensors for various building systems (windows, elevator, doors, solar panels, lights, emergency, etc). The demo simulated an emergency in the building – the elevator dropped to the ground floor, the doors opened, the window tinting cleared up and the lights were highlighting the ways to the floor/building exits. The logic for the actions was controlled by the Intel DK300 series gateway device. Here is a picture of the building at “rest”:

And another of the building during an emergency:

The Informix team joined with the Intel team to extend the demo by developing some analytics around the emergency. The idea is that with a typical RFID card entry system it is possible to know the location within the building of all of the people. When the emergency happens, it is then possible to identify exactly where everyone is, and then track their movements from floor to floor as they exit the building. After the rush of people has left the burning building, it is then possible to identify who is left in the building and where they were last located. This information could be given to the first emergency responders upon their arrival.

The analytics are displayed as separate displays for each floor of the building, showing the name of each person on the floor along with their grid quadrant location. A gauge display showed the total count for each floor. The display updates in real time as people move from floor to floor. Here is a picture of the total building view – there are seven floors. The bottom-right section shows the rollup for all floors.

And a close up for one of the floors

Now for the technology bits – We installed the Informix 12.10.UC4 developer edition port for Linux 32. The DK300 machines have Intel Atom chips on board and so you use the same x86_32 Linux ports that we have been releasing. In case you missed all of the announcements about our gateway ports, you can find the Informix Developer editions ports, including the ARM v6 (Raspberry Pi), ARM v7, and Intel Quark ports here ==> http://www14.software.ibm.com/webapp/download/search.jsp?go=y&rs=ifxids

The analytics controls were developed using Javascript and jQuery. We are looking at adopting some kind of dashboard and will probably move to one of the open source jQuery based graphing packages (perhaps D3, Flot, or jqPlot – if you have any recommendations please pass them on…) to give us more flexibility. Connectivity between the Javascript app and the database is using the new Informix REST interface ==> http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.json.doc/ids_json_043.htm

There is a high likelihood that we will continue to expand the analytics around this smart building demo to help showcase the capabilities of Informix timeseries and our support on these SoC gateway devices. I am intrigued by the building’s rotating solar panels and have been thinking about what kinds of data can be collected and then utilized for decision making in that area. Stay tuned….

We had a great IIUG event in San Diego last week. I have been working on some comments to describe that event and then learned of the passing of Anita McKeithen on Monday (May 4). All of us that have been part of the Informix community over the past twenty years were touched by Anita. She was our focal point for our beta programs, IIUG and other conferences, hiring, and a bunch of other things that we are still discovering.

Most of us did not even know that she had been battling cancer for the past several years. She kept her positive and upbeat personality right up to the end. She was a great person and a real champion for our team and for Informix. Please keep her in your thoughts.

The hackathon at the University of Missouri – Kansas City, which was discussed in last week's post, has concluded. Four of us from the IBM Lenexa lab were invited to join the other corporate sponsors for the judging last night. The Computer Science department performed a pre-screening and selected their favourite six entries. These were the six entries that we reviewed, along with posters from the other project teams.

The six entries spanned some of the suggested use cases. These included an IoT use case that we (IBM) suggested, a YouTube v3 ranking system proposed by Adknowledge, a mobile vs. desktop device detection use case also from Adknowledge, two big data use cases from DST, and a network monitoring tools scheduling and maintenance use case from Ericsson.

The overall winner was a project called "Monkey MedSensor", a healthcare monitoring system that collects vital sign data and then maintains live and historical data for various patients. Since we have been heavily involved with IoT, it was nice to see an Internet of Things project win the event. The team created a Java virtual sensor that generated and sent vital sign data to the application. The team used Node-RED (http://nodered.org/) to wire the various parts together. They mentioned having a bit of a learning curve with Node-RED, but they were very impressed with its capabilities for hooking together the various logic and components for these kinds of apps. (This is our experience as well. Node-RED is a very nice utility for creating IoT apps. Check it out!)

They used a local MongoDB database to store the live and historical patient data. The dashboard is a nice clean UI that shows current patient vital signs and then a selectable historical data window with each of the vital signs charted over the selected time period.

For a commercial application, I think we'd need to see some more analytics that correlate the current and historical data, plus some alert processing that notifies someone when threshold values are exceeded. Being Informix fans, we'd also swap MongoDB for Informix. The Informix 12.10 MongoDB compatibility functionality should support this swap with no changes to the application. This is a good start for sure...

Here are some screen shots of the app:

1) The virtual sensor panel:

2) Part of the NodeRed flow:

3) The historical data window from the dashboard:

We are planning on additional hackathons this year. We will be conducting some of our own that will certainly be based on Informix and IoT. We also are looking to partner with business partners and universities to conduct more like this one. We'll be back at UMKC for their fall semester hackathon, and hopefully several more between now and then.

The University of Missouri – Kansas City Computer Science Department kicked off its spring hackathon Thursday (16 April). The hack runs until midnight Sunday (19 April) with judging next Thursday (23 April).

This is our second collaboration with the school on this event. In the event last November, we (IBM) were the only corporate sponsor. We had about 100 student hackers from two graduate level classes. Since we were making the hack suggestions, the use cases were all around the Internet of Things (IoT), one of our lab’s key projects over the last couple of years. There were some nice mobile IoT apps developed, almost all using Bluemix.

This semester, the hack has grown to involve five local (Kansas City) companies (IBM, DST Systems, Ericsson, Cerner, and AdKnowledge) and about 140 student hackers. The corporate use cases were about BigData and (naturally) IoT. Almost all of the students are now familiar with Bluemix, and so we expect that will be the hosting platform for most of the hacks.

This event has really grown just in the six months since the last hackathon. Dr. Lee, the UMKC sponsor, would like to conduct an event like this each semester. We’ll be more than happy to help! We’ll be back at the school to participate in the judging on Thursday and I’ll document some of the most interesting hacks.

By the way, here is the poster for the event. I really like the super hacker guy with the red cape….

Using the standard MongoDB createIndex() function from your MongoDB compatible program connected to Informix 12.10, each part of a compound index requires 4K of storage. This is because we don't know the data type of the data to be indexed, and so have to allocate index space for the worst case scenario: long string data. The max size of any Informix index is 32K, and so only seven parts can be specified for a MongoDB style compound key. (Yes, 4K * 7 is only 28K, but there is some overhead.)

Here is an example of the standard createIndex() command, creating an ascending index on the state field and a descending index on the zipcode field:

db.collection.createIndex( { state: 1, zipcode: -1} )

Informix does not know the data types of these two fields, and so it uses the internal bson_get() function mechanism to create the index, reserving 4K of space for each BSON part.

So if you have a need to specify an index with more than 7 parts, what to do? We have developed an extension to the createIndex() function which allows specification of the data type. As an example of this Informix alternative:
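A sketch of the extended call, pairing each field with its sort direction and a type designator; the exact spelling of the type tokens shown here is an assumption, so check the Knowledge Center for the definitive syntax:

db.collection.createIndex( { state: [1, "$string"], zipcode: [-1, "$int"] } )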

This call uses the same MongoDB createIndex() command, but includes the data types for the two fields. (String for the values for state and int(eger) for the values for zipcode). Knowing the data types allows Informix to create the index more efficiently, using internal bson_value_<data type>() functions, and should allow many more parts to be included in the compound index.

Depending on the data types of the underlying data, this approach can allow a compound index to include up to 16 parts, which is the Informix limit for compound indexes. This is still short of the MongoDB limit of 31 parts, but much better than the 7 parts supported with the standard createIndex() syntax. The Informix extensions would not be recognized if the application is run against MongoDB, so if you are writing an app that works with both databases, put these changes in some kind of IF check to run only with the Informix server.

For the fans of our Informix REST interface, I wanted to let you know that we had a packaging issue that happened just as the 12.10.xC5 release was getting finalized. We have been packaging the REST components in a ZIP file called nosql_sdk.zip in $INFORMIXDIR/bin. The Tomcat jar file included in this archive (tomcat-embed-core.jar) is inconsistent with the rest of the Java files in the archive. So, if you unzip and configure this ZIP file, the REST listener will not work.

How to fix this? You can grab the whole nosql_sdk.zip from the xC4 release and use it (This is what I did). Or, you can grab just the tomcat-embed-core.jar from the xC4 release and use this. Not clean either way, I know.

This problem has been corrected, and a tech alert is on the way telling you this same information. I just wanted to let our REST fans know that there is no real listener issue, but more of a version incompatibility issue with the embedded Tomcat server.

We will often add an unused VARCHAR column when defining new Informix system tables so that we can use the column for some future use. This saves having to perform an ALTER TABLE operation on the table to add a column later, and in our case, this approach may keep us from some of the conversion-reversion issues that we have to deal with when changing system tables.

Now with Informix 12.10 and its new NoSQL functionality, another option is to add a BSON column to your existing tables. A BSON is the binary form of JSON. Think JSON in this case – the Informix engine can easily cast between JSON and BSON for you. The JSON/BSON will be a flexible container in your table, allowing the inclusion of any kind of data and any number of fields.

The customer table from the stores_demo database was defined a LONG (long) time ago. Suppose you want to add a cell phone and a fax number to the customer information. You could perform an ALTER TABLE operation and add two new columns, one for a cell phone and one for fax, and then deploy your new table and perform updates into the new columns for each customer. This is what we have been doing for years. OK, what happens if in the next week we need to account for some other piece of information, perhaps a pager number, or some GPS coordinates for the customers’ addresses? We would have to go through the same process again.

The idea here is to perform the ALTER TABLE operation once, and add a column of type BSON and then deploy the table. Once you have added a BSON column, you can add whatever additional data you would like with no Informix schema changes.

Let us look at the steps to do this. First, here is the customer table that we all know and love:

Perform the ALTER TABLE operation and add the BSON column:
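The statement itself is a one-liner (extra_bson is the column name used in the rest of this example):

ALTER TABLE customer ADD extra_bson BSON;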

See that the column extra_bson is added, in this case to the end of the table:

Now let us look at the values for customer_num 101 and then add a cell phone and fax number to that customer. Notice that we are setting these two new values in a JSON document (cast to a BSON) in the new extra_bson column:

{
  cell_phone: "602-555-1111",
  fax: "602-555-2222"
}
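The UPDATE that sets this document looks something like the following, casting the JSON string to BSON explicitly:

UPDATE customer
   SET extra_bson = '{ "cell_phone": "602-555-1111", "fax": "602-555-2222" }'::json::bson
 WHERE customer_num = 101;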

When we query this back, why do we see the value as 8 for the extra_bson column? We need to tell the Informix server how we want this data returned. We can receive the entire JSON document back if we cast the extra_bson column to a JSON data type, as depicted in the first SELECT operation (below). If we want to return just the cell_phone field from the JSON as a VARCHAR data type, we can perform the query shown in the second SELECT operation (below).
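Sketches of those queries, using the bson_value_lvarchar() function to pull individual fields out as character data:

-- first SELECT: return the whole document by casting the BSON column to JSON
SELECT extra_bson::json FROM customer WHERE customer_num = 101;

-- second SELECT: return just the cell_phone field
SELECT bson_value_lvarchar(extra_bson, "cell_phone") FROM customer WHERE customer_num = 101;

-- third SELECT: return both the cell_phone and fax fields
SELECT bson_value_lvarchar(extra_bson, "cell_phone") AS cell_phone,
       bson_value_lvarchar(extra_bson, "fax") AS fax
  FROM customer
 WHERE customer_num = 101;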

The last SELECT operation (above) shows retrieving both the cell_phone and fax numbers as VARCHAR data.

This approach gives you the flexible schema benefit of NoSQL right inside your relational tables. Hopefully this gives you some ideas on “future proofing” your existing tables going forward.

This feature has to do with SQL passthrough, which allows running a SQL statement using the MongoDB calling mechanism. In earlier 12.10 releases, there was no automatic cursor instantiation when the SQL passthrough feature was invoked. If run using the findOne() call, only a single row was returned. If run using the find() call, all rows would be returned. If the result set included a huge number of rows, the system resources could be overwhelmed. This feature implements the same "default" cursor creation and manipulation for queries executed in this manner. No user or program action is needed to make this behaviour happen (no API call, no properties file setting, etc.).

Here is an example using a table containing post office zipcode information. The (MongoDB-style) call to find the number of rows shows there are 29,467 rows. The SQL passthrough style query to return all the rows from the same table now automatically creates a cursor and returns back portions of the result set that are managed by the API.
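A sketch of the two calls; the count() call is standard MongoDB, while the passthrough form shown (a find() against the system.sql pseudo collection with a $sql query document) is an approximation, so check the Knowledge Center for the exact syntax:

> db.zipcodes.count()
29467
> db.getCollection("system.sql").find( { "$sql": "SELECT * FROM zipcodes" } )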

12.10.xC4 supports using Informix basic text search indexes with JSON documents. These indexes are particularly useful when the text is larger than supported with regular indexes (255 bytes combined).

There are different commands to create and use text indexes, depending on whether the data to be indexed and manipulated resides in a JSON collection or relational table.

Case 1 – data exists in a JSON collection. In this case, the index is created with the standard ensureIndex() command with the “text” option. In queries where the index is to be referenced, the $text query operator is used with the find() or findOne() command. Here is a MongoDB shell example of creating an index on a JSON collection and then using the index in a query:
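Something along these lines, with illustrative collection and field names:

> db.articles.ensureIndex( { abstract: "text" } )
> db.articles.find( { "$text": { "$search": "informix" } } )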

Case 2 – data exists in a relational table. In this case, the index is created with the new Informix createTextIndex command. To query using the new index, use the $ifxtext query operator with the find() or findOne() command. Here is a MongoDB shell example of creating an index on a relational table and then using the index in a query:

12.10.xC4 supports the creation of geospatial indexes and the querying of this data from MongoDB JSON style applications. GeoJSON has become an industry standard (http://geojson.org/geojson-spec.html) for specifying collections of geographical features. Both MongoDB and Informix support this representation for spatial data.

An example of a GeoJSON data record for the Lenexa, KS post office, showing the location as a Point, would be:

{ "city" : "LENEXA",

"loc" : { "type" : "Point",

"coordinates" : [ -94.773145, 38.961896 ] },

"pop" : 6090,

"state" : "KS",

"_id" : "66219"

}

The GeoJSON format differs from the legacy coordinate format. The same data presented in legacy coordinate format and GeoJSON format:
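Roughly, the legacy format stores a bare longitude/latitude pair, while GeoJSON wraps the coordinates in a typed object:

Legacy coordinate format:  "loc" : [ -94.773145, 38.961896 ]

GeoJSON format:            "loc" : { "type" : "Point", "coordinates" : [ -94.773145, 38.961896 ] }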

A 2dsphere index is required to support spatial queries on the GeoJSON data. The index is created using the MongoDB ensureIndex() command. When run against MongoDB, the command supports legacy coordinate pairs as well as GeoJSON objects; when run against Informix, it supports only GeoJSON objects. In addition, the index requires the geospatial data to reside in JSON collections (as opposed to relational tables) within the Informix database. An example showing the creation of a 2dsphere index on the "loc" field of our city_info collection (format shown above) would look like:

db.city_info.ensureIndex( {"loc" : "2dsphere"} );

Queries on this data are performed using the usual find() or findOne() commands. Special geospatial query operators ($geoWithin, $geoIntersects, $near, $nearSphere) are used to specify the search criteria. As an example, if we wanted to find the closest post office to the IBM Lenexa office, the query would look like:

> db.city_info.find(

... { "loc" :

... { $near :

... {

... $geometry : {

... type : "Point" ,

... coordinates : [ -94.777480 , 38.926404 ] },

... $maxDistance : 5000

... }

... }

... }

... );

The query uses the $near query operator and specifies the comparison point (IBM office) along with a $maxDistance to search (5000 meters). The result returns one post office located within 5000 meters of the specified location:

Spatial queries can also be used within the aggregation framework using the $geoNear operator. $geoNear must be the first stage in the aggregation pipeline and can be used to pass the documents matching the search criteria on to the next stage. As an example (using the same general query presented above), suppose you would like to find the post offices within 10000 meters of the specified point that represent cities with populations greater than 30000. Here is that example:

> db.city_info.aggregate([ { $geoNear: {

near: { type: "Point", coordinates: [ -94.777480 , 38.926404 ] },

distanceField: "calculated",

maxDistance: 10000,

query: { "pop" : { $gt: 30000 } },

includeLocs: "loc",

num: 5,

spherical: true }

} ])

The $geoNear stage is first and returns the points within 10000 meters of the specified location. The query, {"pop":{$gt:30000}}, which looks for documents with a population greater than 30000, is applied next. The entire result set is limited to 5 documents. You cannot use dot notation for the aggregation distanceField and includeLocs parameters in this release. The returned results from our query are:

{

"result" : [

{

"city" : "OLATHE",

"loc" : {

"type" : "Point",

"coordinates" : [

-94.820359,

38.886548

]

},

"pop" : 31694,

"state" : "KS",

"_id" : "66061",

"calculated" : 5780.15788384441

},

{

"city" : "OLATHE",

"loc" : {

"type" : "Point",

"coordinates" : [

-94.775168,

38.873287

]

},

"pop" : 32845,

"state" : "KS",

"_id" : "66062",

"calculated" : 5900.117670562104

},

{

"city" : "OVERLAND PARK",

"loc" : {

"type" : "Point",

"coordinates" : [

-94.68414,

38.958954

]

},

"pop" : 36187,

"state" : "KS",

"_id" : "66212",

"calculated" : 8862.373581047914

}

],

"ok" : 1

}

Geospatial query operators are not supported for sharded data in this release.

Pipeline aggregation operators

The MongoDB style aggregation pipeline is a sequence of aggregation operators where the output of the first “stage” is used as input to the second stage. This pipeline (read left to right) is a mechanism to perform various operations on the results of a query. Some examples are projecting the set of fields to include or exclude ($project operator), $sort, $group, $skip and $limit. This list is not exhaustive. Many of these operators are valid for use when the data resides in either JSON collections or relational tables. Some examples of aggregation operators, some in pipelines and some not:

Example 1 – querying the city_info collection, project out all the fields except for the _id field, and limit the results to 2 documents ($project and $limit are pipeline operators):
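A sketch of that pipeline:

> db.city_info.aggregate( [ { $project: { _id: 1 } }, { $limit: 2 } ] )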

Example 2 – query the city_info collection and find the states with the smallest population. This action is performed by $sum(ming) the various “pop” values of each document and grouping the sums by state (temporary field totalPop). The totalPop values are then sorted and the first three states and total population of each are returned ($group, $sort and $limit are pipeline operators; $sum is a group operator)
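A sketch of that pipeline, using the field names from the city_info documents shown earlier:

> db.city_info.aggregate( [
...     { $group: { _id: "$state", totalPop: { $sum: "$pop" } } },
...     { $sort: { totalPop: 1 } },
...     { $limit: 3 }
... ] )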

12.10.xC4 adds support for several MongoDB style array update modifiers. $each, $slice and $sort are supported in this release. These operators can only be used with data residing within JSON collections in the Informix database.

The $each operator is used with the $push operator to add multiple values to an array. In the following example, we insert some documents which have an array element, query the document collection, update the array in the "foo_b" document by adding elements 45, 55 and 65 using the $each update modifier, and then finally re-query the collection to see the updated array.
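A condensed sketch of that sequence (the collection name and the array field name are illustrative):

> db.arrtest.insert( { name: "foo_a", readings: [ 10, 20 ] } )
> db.arrtest.insert( { name: "foo_b", readings: [ 30, 40 ] } )
> db.arrtest.find( { name: "foo_b" } )
> db.arrtest.update( { name: "foo_b" },
...                  { $push: { readings: { $each: [ 45, 55, 65 ] } } } )
> db.arrtest.find( { name: "foo_b" } )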

The $slice modifier limits the size of the array as part of a $push operator. It can be used to limit the number of elements from either the front or the back of the array after the array is updated. It can also be used to remove all the elements from the array.

The $sort modifier sorts the contents of the array and is also used in conjunction with the $push operator. A value of -1 ($sort : -1) sorts the array elements in descending order, while a value of 1 ($sort : 1) sorts the array elements in ascending order.
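Both modifiers are used together with $push and $each. Continuing the sketch above, the following pushes two more values, sorts the array in ascending order, and keeps only the last five elements:

> db.arrtest.update( { name: "foo_b" },
...                  { $push: { readings: { $each: [ 5, 75 ], $sort: 1, $slice: -5 } } } )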

New export and import commands - exportCollection and importCollection

Informix 12.10.xC4 introduces two new Informix-specific commands to export or import collections from/to the Informix database. These commands are run using a MongoDB API (generally the MongoDB shell) through the JSON listener to export or import JSON document collections. They are optimized to perform bulk operations through the Informix APIs to the database, and will perform better than the MongoDB export and import utilities (mongoexport and mongoimport) when run against the Informix database. The other difference between the MongoDB utilities and the new Informix commands is that the MongoDB programs are stand-alone utilities that run on the command line, while the new Informix commands can be run from within any MongoDB API.

A MongoDB shell example of using the exportCollection command to export the city_info collection as a single jsonArray structure into a file in the /tmp directory:
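Something like the following; the option names (file, format) are assumptions, so check the Knowledge Center for the definitive syntax:

> db.runCommand( { exportCollection: "city_info",
...                file: "/tmp/city_info.out",
...                format: "jsonArray" } )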

Establishing the strategy for calculating the sizes of your Informix databases

MongoDB collects and returns very accurate information on the size of each database. Informix does not typically deal with database sizes at the byte level, but rather in the number of allocated operating system pages. However, since some MongoDB applications are written to rely on accurate information on database sizes, 12.10.xC4 provides a mechanism to return this information.

Determining the number of documents in a collection (across all collections) is simple and fast. Determining the exact size of each document in bytes is expensive, and thus determining the size of an entire database in bytes can also be an expensive operation. If this is absolutely necessary, there is an option for it. Alternatives are also available that sample the size of some number of documents and then multiply the sampled average document size by the number of documents.

The size of the database is returned using the MongoDB listDatabases() command. The sizeStrategy option provides four settings that control the strategy used (and the corresponding work performed by the database engine) to determine the result. The settings are:

estimate – which estimates the average size of the documents in a collection by examining 1000 documents (or 0.1% of the number of documents if there are less than 1000 documents). This is the default setting

estimate: n – which estimates the size of the documents by sampling one document for every n documents in the collection

none – which lists the databases but does no size calculation

compute – compute the exact size of the database by scanning each document in every collection. This can be a very time consuming operation!

Running the listDatabases() command with no option, utilizes the default sizeStrategy option with the estimate setting:

> db.runCommand ( {listDatabases: 1 } )

{

"databases" : [

{

"name" : "stores_demo",

"sizeOnDisk" : "59392",

"empty" : false

},

{

"name" : "zipcodes",

"sizeOnDisk" : "8136806",

"empty" : false

},

{

"name" : "city_info",

"sizeOnDisk" : "655360",

"empty" : false

},

{

"name" : "test",

"sizeOnDisk" : "24880",

"empty" : false

}

],

"totalSize" : "8876438",

"ok" : 1

}

Running the listDatabases() command with the sizeStrategy option with the none setting results in the size being reported as 0:

> db.runCommand ( {listDatabases: 1, sizeStrategy: "none" });

{

"databases" : [

{

"name" : "stores_demo",

"sizeOnDisk" : "0",

"empty" : true

},

{

"name" : "zipcodes",

"sizeOnDisk" : "0",

"empty" : true

},

{

"name" : "city_info",

"sizeOnDisk" : "0",

"empty" : true

},

{

"name" : "test",

"sizeOnDisk" : "0",

"empty" : true

}

],

"totalSize" : "0",

"ok" : 1

}

>

The behaviour can also be specified in the jsonListener.properties file. This will define the strategy for all database size requests during the listener session. The property command.listDatabases.sizeStrategy has the same behavior as described above. The allowable settings for this property are estimate, estimate : n, none, and compute. An example of setting the property for the estimate behaviour would be:
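For example:

command.listDatabases.sizeStrategy=estimate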

One of the basic functions that a relational database provides that a NoSQL database does not provide is transaction support. Because of its long history in the OLTP space, Informix can offer transaction support to MongoDB applications running against the Informix database. Because MongoDB does not provide equivalent functionality, applications wanting to utilize database transactions will need to be modified.

Transactions are managed by the transaction command via the db.runCommand() command. Transaction support is “enabled” – the transaction is started. After subsequent operations, the transaction is either committed or rolled back. The transaction is on the “local” collection only – transactions are not supported against a sharded collection.

Options for the transaction command are:

enable – which enables transaction support in the current database and starts a transaction

commit – which commits the current transaction

rollback – which rolls back the current transaction

disable – disables transaction support in the current database. Subsequent commit or rollback operations would be ignored.

status - reports whether the transaction support is enabled or disabled.

In the following example, the collection ‘foo’ is queried, a transaction is started and two documents are added. The transaction is then rolled back and the foo collection is re-queried. The two documents that had been added were not present since the transaction that had been started, prior to the two inserts, was rolled back. Here is the example:
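A condensed sketch of that sequence (the exact argument form of the transaction command is an assumption based on the option names listed above):

> db.foo.find()
> db.runCommand( { transaction: "enable" } )
> db.foo.insert( { item: "one" } )
> db.foo.insert( { item: "two" } )
> db.runCommand( { transaction: "rollback" } )
> db.foo.find()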

The JDBC 4.10.xC4 driver that ships with Informix 12.10.xC4 includes a new Informix specific class, IfxBSONObject, to interact with BSON and JSON values in the Informix database. There is a new demo program ($INFORMIXDIR/jdbc/demo/IfxBSONObjectDemo.java) that demonstrates working with this new class.

Since JSON is really just a variable length string type, interacting with JSON values involves normal string manipulation. Here is a fragment from IfxBSONObjectDemo.java that shows creating a SELECT operation on a table containing a JSON object and then iterating through the result set and displaying the JSON contents:

/*

* SELECT FROM JSON TABLE

*/

sql = "SELECT * FROM jsontab";

System.out.println("\n [QUERY] [jsontab] " +sql );

pStmt = (IfxPreparedStatement)this.conn.prepareStatement(sql);

rs = (IfxResultSet) pStmt.executeQuery();

count = 1;

while(rs.next()) {

// * JSON data can be retrieved from the server with

// simple getString() method

String result = rs.getString("c1");

System.out.println(" [RESULT] [" +count++ +"] "+result);

}

BSON is a more complex data type, since it contains binary encoded data values. The example code to load three documents into a table bsontab looks like:

System.out.println("*** Create tables for demo ");

executeDDL("CREATE TABLE IF NOT EXISTS bsontab (c1 BSON)");

executeDDL("INSERT INTO bsontab VALUES

(\"{id:10,total:50}\"::json::bson);");

executeDDL("INSERT INTO bsontab VALUES

(\"{id:20,total:50}\"::json::bson);");

executeDDL("INSERT INTO bsontab VALUES

(\"{id:30,total:50}\"::json::bson);");

The documents are constructed as JSON documents and then cast to BSONs using the explicit cast <string>::json::bson.

Some of the same demo code to query the BSON table follows. The query string is built into the sql variable. The project list includes the BSON value (c1), explicitly as a BSON data type (using the explicit ::bson cast). Once the query string is built, it is prepared and executed. We iterate through the result set by calling the new resultSet.getIfxBSONObject() method which returns the BSON as an IfxBSONObject data type. This type’s toString() method converts the BSON to a string value for display.

/*

* SELECT FROM BSON TABLE

*/

sql = "SELECT c1::bson FROM bsontab

WHERE bson_value_int(c1,'id') = 30";

System.out.println("\n [QUERY] [bsontab] " +sql );

//Prepare the query for execution

pStmt = (IfxPreparedStatement)this.conn.prepareStatement(sql);

// * Cast resultset to IfxResultSet to access Informix extension

// to get BSON Object

rs = (IfxResultSet) pStmt.executeQuery();

while(rs.next()) {

//* get the BSON object using Informix extension to PreparedStatement

// note that we use the index since the query returns an

// expression.

bsonObject = rs.getIfxBSONObject(1);

//* calling IfxBSONObject.toString() to deserialize the object for

// readable string;

System.out.println(" [RESULT] [" +count++ +"]

"+bsonObject.toString());

}

//store the last BSON Object into instance variable to be consumed

//by other parts of this demo.

this.bsonObject = bsonObject;

The complete demo program shows examples of other CRUD operations working with the IfxBSONObject class. The methods ensureBsonObject() and ensureBsonObjectBytes() can be used to verify the constructed BSON object.

In a visit to Bolivia last month to support a local Informix event, I met with a group that was actively using Hibernate with Informix. They knew about the Hibernate "patch" that has been hosted on the IIUG web site (iiug.org/opensource). They did not know about the refresh/update to that work.