In this case, once registered, the server would alert the client instead of the client having to constantly poll the database. Once alerted, the client would then select the smart large object and parse the actual alert.

Unlike smart triggers, this functionality is not limited to JDBC; it is usable from CSDK as well.

Well, I quickly saw the ranking he listed and realized we had an issue. Now, to the website's credit, it clearly posts how it arrives at its rankings and makes no promises that it is an accurate reflection of existing usage, though it does think it is a potential bellwether for future usage. With that said, here is the link for how db-engines.com gets its ranking:

db-engines is ultimately at the mercy of the data it uses, and unfortunately that points to a problem with the data. Please note this is not a die-hard fan attempting to say that Informix should be in the top 5 or anything similar; it is certainly possible that a ranking system with more accurate data would still place Informix near the bottom of a top-25 list. That being said, here is a quick look at three problems with the data (or maybe two problems with the data).

First off, IBM products are uniquely handicapped by the way this ranking is achieved. Why? Because IBM's brand of marketing is to market concepts, not actual products. As such, about the only time there is any notice of significance is around product release dates, and sometimes not much even then. This is not a criticism of IBM Marketing per se, just a recognition that the result of said marketing will depress the results of this ranking process. If you are measuring website mentions, it suddenly means Informix (and DB2, for that matter) gets far less mention, because the marketing department just doesn't release much material.

The second problem is that almost all technical discussion of Informix occurs on iiug.org or in PMRs (which are proprietary). The iiug.org mailing lists are relatively active and consistent, but since neither PMR work nor iiug.org discussions are published to the two sites db-engines uses, that results in a very depressed score as well.

Third has been the failure of people such as myself to spend time talking about the product. I have a blog, and it used to be read, but work-life issues got in the way, and until I read his ranking work I had let things slide. Had the amount of work I do with Informix decreased? Not at all, but my frequency of talking about it in a way that websites like db-engines.com could track had dropped to nearly nothing.

So the moral of this story is not that db-engines is bad, merely that it is ultimately limited by the information available to it.

A secondary moral would be: talk about the product. If you use it, tweet about it or blog about it. Ultimately db-engines is driven largely by grassroots information, and Informix users, and those that like it, have been our own worst enemy in not pointing the product out.

Yes, this blog is slowly working its way back to being active. The intent will still be to help discuss issues revolving around the Informix database platform and work, again with an eye towards both the developer and the administrator. After all, if no one is writing code against a database, what is the point of an administrator?

What is exciting to me is the ability to implement the pub/sub model of event-driven programming seamlessly into Informix code.

Think about something like inventory management. Previously within Informix, you would need to actively poll if you wanted an alert that inventory had fallen below a certain amount. Now, with this functionality, you can be automatically alerted of the event. A very welcome addition to the feature set, IMHO.
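The shift from polling to push can be sketched in a few lines of plain Python. Note this is not the Informix API; `InventoryMonitor` and its threshold are made-up stand-ins for the server-side trigger and registration step:

```python
# Minimal pub/sub sketch: instead of polling the inventory level,
# interested parties register a callback and are pushed the event.
class InventoryMonitor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.subscribers = []

    def subscribe(self, callback):
        # The "register" step: the client asks to be told about events.
        self.subscribers.append(callback)

    def record_level(self, item, level):
        # The "server" side: on every change, notify rather than wait to be asked.
        if level < self.threshold:
            for notify in self.subscribers:
                notify(item, level)

alerts = []
monitor = InventoryMonitor(threshold=10)
monitor.subscribe(lambda item, level: alerts.append((item, level)))
monitor.record_level("widget", 50)   # above threshold: no alert
monitor.record_level("widget", 7)    # below threshold: pushed to subscriber
```

The point of the design is that the consumer does no work between events, instead of burning cycles re-running the same query on a timer.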

So I ran into an interesting issue last week. A customer couldn't alter a table: non-exclusive access. Sounds pretty normal, right? I mean, after all, it's not like a DBA doesn't see this fairly often.

The normal routine is for a DBA to run the following:

Figure out the partnum for the table being altered. The best way is to run the following SQL from the database the table is in: select hex(partnum) from systables where tabname = "<tabname>"; where <tabname> is the name of the table you are altering.

Using onstat -k | grep <partnum you got above>, find out who is holding the locks.

Contact the user about the locks, or if you are in a rush, just kill the offending user/session.
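Put together, the routine looks roughly like the following shell sketch. It assumes a live instance with the informix environment set; `mydb`, `mytable`, the partnum, and the session ID are placeholders:

```shell
# 1. Get the partnum of the table being altered (dbaccess reads SQL on stdin)
echo 'select hex(partnum) from systables where tabname = "mytable";' | dbaccess mydb -

# 2. Find out who is holding locks on that partnum (placeholder value shown)
onstat -k | grep 0x00500002

# 3. If you cannot wait on the user, kill the offending session
onmode -z 1234        # 1234 = the session id owning the lock
```

onmode -z takes the session ID, not the thread ID, so map the lock owner back to a session before killing anything.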

However what happens when you still get non-exclusive access after doing the above?

You then have two considerations: #1, check for referential integrity issues, and #2, look for open cursors.

RI can be checked using dbaccess or dbschema, but how do you check for open cursors?

The easiest way is with

onstat -g opn

In the situation I ran into, we had several open cursors with transactions running against the child table that held the referential-integrity constraint.

This allowed us to identify where applications were forgetting to close the cursors. As soon as those cursors were closed, the alter table was successful.

From a development standpoint, this brings up a salient point to always remember: close your cursors.

Still, this one is pretty cool, because the lab will be talking about application development with Data Studio and Informix. Even if you don't use Data Studio, and even if you don't plan to, I highly recommend attending. The more interest a call like this generates, the more calls like this (i.e. on application development) will occur.

This session will take a customer use case and go through each step of the development cycle, demonstrating how IBM Data Studio helps improve the productivity of Informix users. IBM Data Studio has something to offer each type of user: for Java developers, the rich Eclipse integrated development environment and utilities that support Java application development; for SQL developers and administrators, productivity features like the object-management explorer. Its powerful stored procedure debugger and deployment manager allow you to deploy multiple scripts to test and production all at once.

So the new version of Informix is fast approaching, and IBM has a webcast on that very topic. Below you will find a summary, and the link to sign up.

The New IBM Informix: It's Simply Powerful

Date: Tuesday, March 5, 2013

Time: 10:00 AM PST

IBM Informix is exceptional database software that is well known for its superior performance, high availability and efficiency, minimal complexity and lower computing costs to power online transaction processing (OLTP) and decision support applications for businesses of all sizes. Informix incorporates design concepts that are significantly different from traditional relational platforms, resulting in extremely high levels of performance and availability, distinctive capabilities in data replication and scalability, and minimal administrative overhead.

The newest release offers clients and partners the ability to take their business into the future, right now! Whether you are looking for help maximizing your daily business activities through more efficient operational analytics; deploying applications to the private cloud; working with sensor or meter data; or just looking to increase your productivity and usability, the new release brings you a cost-effective, simply powerful solution that addresses all your data management requirements.

It's a new global variable, introduced circa ESQL/C 3.10. FetArrSize indicates the number of rows to be returned per FETCH statement. This variable is defined as a C-language short integer data type. It has a default value of zero, which disables the fetch-array feature. You can set FetArrSize to any integer value in the following range:
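FetArrSize itself lives in ESQL/C, but the effect is easy to see in a language-neutral sketch, where each call to `fetch_all`'s inner loop stands in for one client/server round trip. The function and numbers below are illustrative, not Informix API:

```python
# Sketch of why a fetch array helps: batching rows per FETCH cuts the
# number of client/server round trips for the same result set.
def fetch_all(rows, fetch_array_size):
    """Return (rows_fetched, round_trips) for a given batch size."""
    size = max(1, fetch_array_size)        # 0 disables batching: one row per fetch
    fetched, trips = [], 0
    for i in range(0, len(rows), size):
        fetched.extend(rows[i:i + size])   # one FETCH returns up to `size` rows
        trips += 1
    return fetched, trips

rows = list(range(1000))
_, trips_single = fetch_all(rows, 0)     # FetArrSize = 0 (default, disabled)
_, trips_batched = fetch_all(rows, 100)  # FetArrSize = 100
```

With 1000 rows, the disabled case costs 1000 round trips versus 10 when fetching 100 rows at a time; the rows returned are identical either way.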

So I've been working on a proof of concept with the new Informix Warehouse Accelerator. Part of that is getting data from source systems, and often those source systems are on another database system. When doing that work you inevitably use an ETL tool of some sort, and the customer I'm working with uses IBM DataStage.

I'm using an older version of DataStage, and the ODBC driver is slow. So I was looking for a quicker way to load, while at the same time not taking up any space except inside the database. So I wanted to share the method used:

Now, one thing to keep in mind is that if trust is not enabled, the window doing the cat and redirect for the pipes may need a password entered. So far this has been at least 8x faster than ODBC. And note the other piece: I only used one pipe; obviously you can use as many pipes as DataStage will allow. I also didn't use PDQ, which would have increased performance for my fragmented table too.
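The core of the method is just a named pipe: the loader reads the pipe exactly as if it were a flat file, while the extract is written into the other end, so nothing lands on disk outside the database. A minimal sketch, with `wc -l` standing in for the Informix load utility reading the pipe:

```shell
# Named-pipe load sketch; wc -l plays the role of the loader.
set -e
PIPE=/tmp/load_pipe.$$
mkfifo "$PIPE"

# Producer: in the real setup this is the `cat <extract> > pipe` window,
# or DataStage writing its extract straight into the pipe.
printf '1|alpha|\n2|beta|\n3|gamma|\n' > "$PIPE" &

# Consumer: the loader side reads the pipe like any flat file.
wc -l < "$PIPE"

wait
rm -f "$PIPE"
```

Because a fifo blocks until both ends are open, the producer window appears to hang until the loader starts reading, which is normal.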

So I got back from this year's IIUG. It was a blast, as usual. The IIUG is making efforts to increase the number of presentations that are applicable to developers. This year that included presentations on database programming with PHP, a section on open-source coding with Informix, and programming with Drupal, and then I did a best-practices presentation for application developers.

I don't know when the IIUG will make the presentations available, but you do need to be a member to get them; the process is simple, and free. Just go over to

For those of you who also follow DB2/Z this is effectively the Smart Analytics Optimizer for Informix. The above link is to our documentation on the product.

I've had the opportunity to work with members of my own team in the Accelerated Value Program on the Ultimate Warehouse edition for the last month. It can be made transparent to the application developer, and it flat out flies. Once set up and enabled, IWA has generated, for my team so far, a performance improvement of more than 66 times the speed of the same Informix query when not accelerated. That query had already been significantly improved by the star/snowflake support added to the optimizer in 11.70.FC1. Essentially, IWA has given the OLAP/ROLAP queries we were running OLTP response time.

If you noticed above, possibly the best part of this is that it is transparent to the application. So if you are using IBM Cognos, or any other business-intelligence tool, you just have to ask the DBA to turn IWA on for you; you don't have to get any special "IWA-aware" tool to use the product.

I will definitely talk more about IWA as I have time and am allowed. For those of us who have to write against and use warehouses or data marts, this is a big deal.

So I've been working with some embeddability features and was reminded of a feature that I think is very cool but so far not used very much. It's called solara, and Guy talked about it some when he had this blog. Since it's been a few years, I thought it might be worthwhile to dust off one of his old articles and remind everyone about the ability to embed a web server inside of Informix.

So the next question we have, now that we know why we encrypt, is what to encrypt.

Ultimately we have only two areas to encrypt. The first is encrypting our network connections and the data that travels over them. The second is encrypting the actual data when it is "at rest", an industry term for encrypting the data where it has permanent or near-permanent storage.

Different compliance standards request different things. Some only care about the storage, others only about the "in flight" data, and some require both. You have to know what your requirements are if you only want to do some encryption, versus going wholesale.

It is important to remember that any encryption carries a performance cost. Some methods cost less than others, but it is a cost nonetheless.

My next post on this topic will cover the network options for encryption.

Note the above is hopefully the first in a series, as it uses the consumption of web services to get the information. The demo is not interactive (i.e. you cannot perform searches or insert data), but since you are exchanging information with a web server, modifying the example is certainly doable.

So I'm sure I can hear it now: why even bother asking "why encrypt"? We've been told to do it, and so we need to. You can certainly look at it that way, but the different reasons to encrypt impact application developers differently. In some cases it means you have to make changes to your application, in other cases it means major changes to your application, and in still other cases it means no changes to your application at all.

A quick legend for those unfamiliar with the terminology of securing data:

"over the wire" means that you encrypt or secure your network connection, using SSL is a common method of "over the wire" encryption.

"at rest" means where your data is stored is secured. I would love to call this "disk level encryption", but the truth is DLE is a method used for encrypting data "at rest", and therefore just causes too much confusion when used.

Here are the primary reasons to get into the encryption game, and the impact it likely has on an application developer:

Security: As you can imagine, this is a broad one and can mean different things to different people. Consider the NSA or the military; in this case you may be asked to meet a certain security level even though your application has little or no "secure" data. This will likely require all data to be secured "at rest" as well as "over the wire", and may even require changes to the application to allow for two-factor authentication or other types of security.

Regulatory compliance: This one is a can of worms; we are finding out more every day about which law applies to which customer and what they must do about it. Depending on the choices your company makes, this can range from relatively painful to nearly painless for an application developer.

Protecting against physical theft: This one is the least painful for application developers, as in most cases it only means securing data "at rest".

While there are definitely others that may come into play, these are the primary considerations as to why we should encrypt. In my next post on this topic we'll cover what we can encrypt.

So I've definitely gotten enough feedback to realize I should be discussing what is consuming most of my time these days, especially as it relates to application development. So here is my plan, and I will link to each subsequent post so this also serves as an index. Since I'm currently working with Informix and IBM's Database Encryption Expert, I will eventually spend a fair amount of time discussing implementation methods and strategies using this product. I'm certainly not saying that IBM DEE is encrypted

Please note, the "what to encrypt" section is not about what particular data you should encrypt but instead what types of things you can encrypt. I expect this to be a long-running series, and I may interleave some other stuff in the interim.

So, as has been painfully obvious, I haven't been blogging particularly frequently over the past few months. Now, on the one hand, you could just say that the "honeymoon period" for me on the blog is over, but the truth is I've been buried in regulatory-compliance work and other security-related issues. Of late, I've been working especially hard with a customer on implementing IBM Database Encryption Expert with Informix. It's been challenging learning a product that is focused on being integrated into the OS layer, but fun too. It has also made me wonder how much of this might apply to application developers. Sure, the intent is to be as transparent as possible, but if your data has to be encrypted/decrypted, do you want to know about it? And if so, how much?

So anyway, I'm asking for feedback as to whether you would like to hear a bit more about encrypting databases, the methodologies, and what I firmly believe is the best choice for Informix (well, OK, all databases).

In case you missed it, and I'm guessing you haven't, Informix 11.70.FC1 was released yesterday. It has a lot of very nice features for developers, which I will be covering over the next few months. Having been involved in the beta, I am very excited to see this version go live. It has some great features that will benefit a developer, both directly and indirectly.

Few companies have a meaningful way to measure the value of IT and IT projects before making an investment. Technology providers frequently talk about features and functions but sometimes forget to help potential clients understand benefits.

Recently, IBM commissioned Forrester Consulting to examine the total economic impact and potential return on investment (ROI) that organizations may realize by deploying IBM Informix database software. The study uses a comprehensive methodology to bring third-party, objective ROI analysis to organizations considering the use of Informix.

The conclusion? IBM Informix delivers high performance and cost efficiency, including administration efficiency, reduced downtime, improved server utilization, and reduced support costs. But don’t take our word for it. Read the report for yourself.

So have you ever wanted an easy way to know how long your SQL waited on I/O? What about the actual number of sequential scans for an individual query? How about the average execution time of a query, without running a script and using time() or timex() as part of the equation? I know I have. And until we got to Informix 11.10 and above, we didn't have that opportunity, at least not natively. Technically, we had an old IBM/Informix product called I-SPY that offered most of the functionality you might want, but it was:

High Overhead

A separate application to manage

Beginning in version 11.10, we have the ability to capture that information natively. It is handled by a new ONCONFIG variable called SQLTRACE, which can be set like the following:

# SQLTRACE - Configures SQL tracing. The format is:

# SQLTRACE level=(low|med|high),ntraces=<#>,size=<#>,

SQLTRACE level=high,ntraces=1000,size=2,mode=global

I pulled that out of one of my test boxes, and you can see I have mine set to high. That mode has slightly more overhead, but not a huge amount, and it gives you a lot more diagnostic information.

The best thing about SQLTRACE is that you can set it dynamically. You can use OAT to set it, or you can set it yourself using the sysadmin API. The syntax is fairly easy, so to mimic what I have above it would be
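From my notes, the sysadmin call looks something like the following, run while connected to the sysadmin database; double-check the argument order against your version's documentation before relying on it:

```sql
-- Mimic SQLTRACE level=high,ntraces=1000,size=2,mode=global dynamically
DATABASE sysadmin;
EXECUTE FUNCTION task("set sql tracing on", 1000, 2, "high", "global");
```

The same task() function has a corresponding "set sql tracing off" call, so you can turn tracing on only while chasing a problem and avoid the overhead the rest of the time.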

The next question, of course, is how you access this information. You have two ways, plus OAT, to look at the info; the first is through onstat.

In this case it's onstat -g his, and it has the following type of output:

This one is just showing a database connection, so nothing particularly noteworthy, but it still shows you the format you will see for all queries.

You can also see that, like onstat -g sql, we trap the error number. And yes, it looks like I have ER turned on somewhere but didn't actually create the syscdr database.

If you look a little closer, though, this output also shows you the caveat to this functionality: the info is kept in the equivalent of a circular linked list. So looking at the above, trace number 1001 will overwrite your first entry. Note that OAT comes with a function that will let you write this info to disk, preserving it for historical analysis, or a poor man's auditing of queries.
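The circular behaviour is the same as a fixed-size ring buffer. A quick Python sketch of the idea, with ntraces=5 just to keep it readable (nothing Informix-specific here):

```python
from collections import deque

# Sketch of the ntraces behaviour: the trace buffer is circular, so once
# it fills, each new statement overwrites the oldest entry.
ntraces = 5
buffer = deque(maxlen=ntraces)
for trace_id in range(1, 8):          # 7 statements get traced
    buffer.append(trace_id)           # 6 overwrites 1, 7 overwrites 2

print(list(buffer))                   # only the 5 most recent survive
```

This is why dumping the buffer to disk on a schedule matters: anything older than the last ntraces statements is simply gone.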

The other option for gathering the info is SQL, specifically querying the syssqltrace table. The output is not as pretty, but it allows you to search on particular session IDs, or most anything in the above output.
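As a sketch, a query like the following pulls the slowest traced statements for one session. The column names (sql_sid, sql_runtime, sql_statement) are from my recollection of the sysmaster schema, so verify them against your version before building anything on them:

```sql
DATABASE sysmaster;

-- Ten slowest traced statements for session 36
SELECT FIRST 10 sql_id, sql_runtime, sql_statement
  FROM syssqltrace
 WHERE sql_sid = 36
 ORDER BY sql_runtime DESC;
```

Because it is just a table, you can join it, aggregate it, or archive it with INSERT INTO ... SELECT, none of which onstat can do.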

All in all this is a great advancement if you are trying to track down poor performing queries.

So the last time I actually talked about something, versus just taking note of something, I was talking about what was available from a memory-profiler standpoint for sessions. But what about just seeing what each statement in your SQL is taking up, from a baseline perspective? Well, the good news is that Informix already has something that answers that question, and that tool is onstat -g stm.

So what do you see when you run onstat -g stm?

As you can see, onstat -g stm has some very useful stuff, as well as some stuff that people like me (i.e. support) care about.

The first thing you will notice is that this onstat breaks out the SQL statements per session. That's right: you could run onstat -g stm <session id> as well, to get the info for a specific session.

Now, looking at the columns, you will see sdblock (useful for tech-support guys like me in certain situations), heapsz, and statement.

The statement is self-explanatory, and heapsz is the size of your heap in bytes, which breaks down to the bulk of the memory your SQL is taking up.

Note there are a few gotchas. The biggest one is that it doesn't really drill down. So if you are running an SPL routine, it tells you the size of the SPL, not of each query inside it.

All in all, though, a very nice command for tracking your session's SQL.

Next time we will talk about looking at DB resources your SQL takes up using SQLTRACE, and syssqltrace.

A couple of weeks ago I asked about the usefulness of a memory profiler for database sessions. While I want to write another blog post on that, perhaps hashing out what application developers would really like, I wanted to talk about what we have as an option right now if we are trying to profile memory. While it often seems like we have very little info, we really have quite a bit; as the feature request suggests, though, it is not always useful.

So what tools do we have at our disposal today for memory profiling?

onstat -g ses <session id>

You could probably make arguments for other onstats, but this really is it for memory profiling.

So let's take a quick look at onstat -g ses

In this instance we are going to do the following:

select * from customer;

On my server this happened to be session #36, so let's take a quick peek.


So here we have the output. Make sure you see tid on line two of our output; it happens to be 350. The thread ID, or tid, can be used for quite a few onstat commands, all of which are useful.

Now, where this can be useful from a memory-profiling perspective is the "Memory pools" section, specifically the list of components. For all intents and purposes this is the breakout of your session's memory. The bad news is that the components are all named after internal structures rather than things that would make sense to someone without access to the code.

Still, this at least gives you something to pass to IBM Technical Support, to help you identify how Informix might actually be using memory.

This is helpful, but it also indicates why I've gotten so much feedback on the need for a session memory profiler.

One of the things that Rob Thomas talked about at IIUG this year was how there would be increased content for Informix. developerWorks is one of those locations, and we have a relatively new tutorial out on cloud computing.

So one of the reasons for the infrequent postings of late has been vacation, but some of it has been helping a customer out with security-related issues. I think that all too often we DBAs forget the impact, direct and indirect, that these mandated changes have on application developers. One such impact is in data-privacy laws. Ultimately, what this means is that you soon will not, from a legal-liability standpoint, be allowed to restore a production server instance into a test environment to try to reproduce a problem. The solution is some sort of application that transforms your data so that it is still valid for testing but has no direct association (beyond, say, data distribution) with the actual production data.

IBM has a very nice product to help you with this, Optim Data Privacy, and here at developerWorks we have had two recent articles on this very topic.

Are you or your organization considering application development on the iPad or iPhone, especially given the new OS which will allow multithreading?

Would OAT, or something like it, be a tool that you as a developer, or even as an administrator, would like to have available on an iPhone or iPad? If you would like something like OAT, would it be fine as a web-based app which you could use through Safari, or would you prefer something native, if possible?

Back to updates and potentially useful information. Many of you may now be writing apps for Informix that run on clients and servers using some form of LDAP for user authentication. If you happen to be using Active Directory as your chosen form of authentication, please check the following:

Microsoft already has a fix for the problem, but in the meantime if you are getting inexplicable -951 errors when attempting to connect to an Informix instance using Active Directory, this may be your culprit.

Blogs, by their very nature, are often very self-serving. I mean, the blogger almost always writes about something he wants to talk about. But in order to have a sustainable blog that stays on topic, you need feedback on what your readers want to see discussed.

So, is there anything from an Informix application-development standpoint you would like to see discussed?

And yes, I'm saving this blog entry to re-post periodically. I'm not out of ideas, and hope I never will be, but I value your opinion.

As you work supporting a database product, in my case the Informix product line, you often find yourself working on things that may or may not be useful to many others besides the customer you are currently working for. While I see Unicode issues crop up across more than the usual set of customers I work with, I still haven't seen that many overall, so I cannot help but wonder whether this is because Informix globalization is so well understood by developers, or whether it is actually still on the horizon.

So would a discussion about application development considerations for Unicode be worthwhile?

I might blog on it anyway, but the feedback will determine how much I concentrate on blogging about it.

Hope everyone has had a couple of good weeks; I've been on vacation for most of them. Family reunions can be a lot of work, let me tell you.

So one of the customers I support made an interesting feature request lately, and I was interested in your feedback. As an application developer, this particular customer feels he doesn't have enough tools at his disposal to know what a session is doing with the memory it is consuming. So his feature request asked for a session memory profiler: basically, so he could know how much memory is being used for temporary tables, how much is held by cursors, etc.

So my question to you all, is how valuable would you find a tool like this?

Well, a crazy workday kept me from blogging yesterday. I was, however, reminded of an important piece of troubleshooting applications, and even database instances. What was that piece? Never get hung up on a single test box, or a single test instance. The reason may be obvious, but the problem is that if you get hung up on a single instance or box, you can miss the actual problem.

Take yesterday, for example. I was helping a customer with a box that had recently migrated to 11.50.FC5. Their app was crashing every time the engine came online, and in the process was crashing the Informix engine as well. Now, as a support engineer you tend to focus on the assertion-failure file and shared-memory dump, just like an application developer would focus on debug logs and a core file. Well, to make a long story short, after trying to identify the problem, I finally asked them to test on a separate box that had 11.50.FC5, if they had one. They did have another test box, and there their application did not crash and worked as expected. It turned out there was no problem with Informix, or the application; the original test box had significant issues all its own, due to an unforeseen accident that neither the developers nor I was originally aware of.

It's so easy these days, in this "who's to blame" society, to forget that sometimes conditions exist where no one is to blame. Accidents happen, and it's what we do to identify and correct the issue, accidental or not, that helps make our applications, and ourselves, successful.

So as I'm messing around with the Informix Ultimate-C edition for Mac, I am also looking at CSDK 3.50. And while there is nothing wrong with the product, it does make me wonder what else a developer might like to see with it. In our (C)lient (S)oftware (D)evelopment (K)it we have the following:

Embedded SQL for C.

ODBC

If you happen to be on a platform other than Mac, you also get JDBC.

What else would you like to see in a CSDK bundle? Maybe I'm getting spoiled, but looking at Microsoft and Apple, if you get an SDK you actually get a real toolkit, something that also helps you build rapid prototypes, or even full-fledged applications. I honestly think that IBM has a solution there already too. The free version is called Data Studio, and with just a little tweaking, IMHO, it could be exactly the GUI programming tool I see missing from the CSDK bundle. Even then, though, I think all we would see on Unix platforms would be JCC, JDBC, ESQL/C, and ODBC, and the question then is "is that enough, or do you want more?"

Would PHP, Ruby, and Perl be enough? What else could or should a developer want in a CSDK bundle? I want to hear your thoughts on the matter.

So now that the announcements are over, it's time to do a little evaluation. To that end I am going to download and install Informix Ultimate-C edition for Mac. If I get enough requests I will run through this same exercise on Windows, but for the moment I will presume that to be the same as the Mac edition, only more "Windows-like".

So, on to the first part: downloading a copy. The good news is it is very easy to find. If you merely go to the Informix website, you can click on the Ultimate-C edition for Mac, and there is a download link. The bad news is that you have to go through the same old routine you always go through when downloading a product or demo from IBM: fill in tons of radio buttons and other assorted things for IBM sales follow-up. While I understand the rationale behind it, that doesn't mean I don't sympathize with everyone who doesn't want to create an IBM ID and click what seems like 100 radio buttons just to download a "no charge" product.

So we are now past the hoops necessary to download the product, and we are downloading it. It's not lightweight, but it is still a smaller footprint than a lot of other things. Total space required for the download? Well, according to Finder, it's 99.44 MB.

As soon as the download completes successfully, Finder will open up the mounted .dmg file like so:

As you can see, this is the standard .dmg file, and by default we have the standard .pkg file.

I would suggest, before going any further, that you create an informix user ID and an informix group. The first reason is that you then know what those IDs are, but there is a second issue that can show up, especially when upgrading your Mac OS. The second reason to create your own informix ID and group is that, while the install script creates them for you, it does so at the command line and "silently", for lack of a better term. While there is nothing wrong, per se, with the way the installer creates user IDs and groups, it creates an interesting visual problem: anyone who uses a Mac manages users through System Preferences -> Accounts, and unfortunately the "silent" user creation means that the informix user and group will not show up there.

All right, then, now it is time to go to the install itself. It has the really nice install-package wrapper of most Mac apps, and looks like the below.

For anyone used to Mac installs, this is the standard "pretty" installer. It looks good and is very Mac-centric, and even seems very fast, until you run into a slight problem: this installer calls another installer to do the actual install.

That actual installer looks like this:

So, just like this part of the page, the install feels a little cluttered. As each piece installs, be aware that you will eventually need to go back to the package installer to close its window. I only mention this because installing the product may not be the only thing you are doing, which means the package installer may be hidden behind a bunch of other windows. Note that at one point you will be asked whether the installer should update the kernel. If you have installed Informix before, you can say "no"; otherwise say "yes".

OK, so following those steps (mostly just clicking), it took a little less than five minutes to install everything on my MacBook Pro. All in all, a relatively simple, painless process, but also a standard Informix install.

I expect to blog more about this edition, including what its "limitations" will likely mean for a developer.

For those of you who made it to IIUG, I'm sure you all remember Rob Thomas promising more to come
on offerings and other changes. Well, today is that day, and it is a great day for anyone who wants
to do application development on Informix.

Gives businesses, ISVs, and OEMs the ability to develop and
deploy enterprise-class functionality for departmental or
small-to-medium sized business solutions, at no cost.

Look at that again.. Windows and Mac for the Ultimate-C Edition at no cost. So if you want to design, develop, and deploy a Windows- or Mac-based solution that needs a robust, full-featured RDBMS, then Informix is now the clear best solution.

Critics might argue that this blog has over the years become a little sparse on actual weighty topics related to application development, and somewhat abundant when it comes to reprinting random announcements and links to other posts. It could be further argued, by the most exacting of readers, that my average post takes about 25 seconds to type, and I don't even bother checking it for typos..

Therefore I am very pleased to welcome Mark Jamison, an enterprise support engineer, trenchant Informix developer advocate, and possessor of many other talents.. as a technical author for this blog. You'll be seeing posts from Mark over the coming months.

Informix User Group stalwart Norma Jean has started her own blog where she writes about Informix, the development tool GeneXus and IIUG related matters: check out Thoughts about Informix, GeneXus, & life in general. The photo makes me want to visit Wisconsin..

During a visit to London last November I met Clive Eisen at Hildebrand Consulting to learn about the solution they provided for the Digital Environment Home Energy Management System (DEHEMS) - a project to monitor home electricity consumption to a fine level of granularity, enabling people to make significant savings in energy costs. It was also an eye-opener to learn about some of the appliances that use the most electricity in a typical home.

Hildebrand needed to create a solution capable of handling 50,000 new database entries per second with time-series data. Hildebrand selected the Informix TimeSeries DataBlade and Real-Time Loader as technology that could handle this level of throughput with complex data.
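To give a flavor of what the TimeSeries DataBlade looks like on the schema side, here is a hedged sketch run through dbaccess. The database name (energydb), row type, and column names are all invented for illustration, and it assumes the TimeSeries DataBlade is already registered; it is not Hildebrand's actual schema.

```shell
# Sketch only: illustrative TimeSeries DDL, guarded so it is a no-op
# on machines without the dbaccess utility. All names are assumptions.
if command -v dbaccess >/dev/null 2>&1; then
dbaccess energydb - <<'EOF'
-- one row type per reading: the leading column of a TimeSeries row
-- type must be a DATETIME YEAR TO FRACTION(5) timestamp
CREATE ROW TYPE reading_t (
    tstamp  DATETIME YEAR TO FRACTION(5),
    watts   DECIMAL(10,2)
);
-- one table row per meter, holding that meter's whole time series
CREATE TABLE meters (
    meter_id  INTEGER PRIMARY KEY,
    readings  TIMESERIES(reading_t)
);
EOF
fi
```

The point of the design is that readings are appended inside a single TimeSeries value per meter rather than as millions of individual rows, which is what makes ingest rates like the one quoted above feasible.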

The next Informix Chat with the Lab is just over two weeks away. This time Mark Ashworth covers an increasingly in-demand Informix feature: the Spatial and Geodetic IDS extensions. Here are the details..

The Spatial and Geodetic extensions to IDS enable the user to store and query objects based on their position. This talk will cover the data types used to model the placement or form of real-world features (including point, line, circle, and polygon), the self-tuning spatial indexing for high performance, compliance with the Simple Features Specification, and Web-based interfaces. We will also present geodetic solutions to help with the problems you face when using traditional flat-map-projection-based systems.
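As a rough illustration of those data types ahead of the talk, here is a sketch of a spatial table, index, and query run through dbaccess. The database (mydb), table, and column names are made up, and it assumes the Spatial DataBlade is registered.

```shell
# Sketch only: illustrative spatial DDL and query, guarded so it is
# a no-op on machines without the dbaccess utility. All names are
# assumptions.
if command -v dbaccess >/dev/null 2>&1; then
dbaccess mydb - <<'EOF'
CREATE TABLE stores (
    id    INTEGER,
    name  VARCHAR(64),
    loc   ST_POINT
);
-- the self-tuning R-tree spatial index mentioned in the abstract
CREATE INDEX stores_loc_ix
    ON stores (loc ST_Geometry_ops) USING RTREE;
-- a Simple Features predicate: which stores fall inside a polygon?
SELECT name
  FROM stores
 WHERE ST_Within(loc,
       ST_PolyFromText('polygon((0 0,10 0,10 10,0 10,0 0))', 0));
EOF
fi
```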

The International Informix User Group (IIUG) conference ended on Wednesday, followed by a well attended Customer Advisory Council meeting on Thursday.

It was a good conference and I'm looking forward to being able to download the conference material from the website. As usual I didn't make it to many sessions except ones I had to moderate. This at least enabled me to catch up on the work Informix has been doing with VMware on performance best practices in a presentation by Sreeni Paidi from IBM and Robert Campbell, a VMware technologist. The fruits of this endeavor will soon be published as a white paper.

Two of the speakers at the conference have started blogs this week. I have been mentioning a plethora of blogs started by IBMers over the past few weeks, and it's good to see more Informix related blogs started by Informix experts in the wider community.

Database and Baseball stuff

The most entertaining presentation at IIUG had to be Mike Magie's Informix in the Everyday World - Moving from 10 to 11. Mike works for consultants SAIC at the USDA and took us through some stories of his Informix DBA experiences, also contrasting Informix features and architecture with SQL Server. Mike had the audience laughing throughout the talk, once his wife got him a working laptop. Anyhow, getting to the point, he has also started a new blog: Database and Baseball stuff. It begins with a discussion of the new External Tables feature in IDS, showing the phenomenal unload performance it delivers; a later post will discuss loads.

There is also some stuff about baseball; I remember seeing something about pitch macros before I fell asleep.

Informix: Art Kagel's View

Art Kagel, Informix guru currently working for Oninit has announced a blog, in his own inimitable style..

Just announcing that I have finally joined the rolls of "those idiots who blog" as I used to say (cannot comfortably say it any longer I guess).

The Informix technical writing team, obviously envious at are amazing riting skils, have decided to get in on the act and start their own blog.

The blog, inappropriately named, Appropriate Content can be found here. It begins with an introduction to the team, where I learned that many of our technical writers are in fact real.

I enjoy working with the writing team, and I'm always impressed how quickly they can convert the obscure mumblings of engineers and incomprehensible technical specifications into lucid text. I'm looking forward to reading more.

The IIUG conference is next week and our user-friendly Usability guy Howard Glaser has shared some details about the Usability Sandbox sessions that are taking place. The feedback from these sessions goes directly to Development and really helps us understand your needs and concerns. Many of the features we are working on for the Panther release of Informix came out of usability sessions conducted over the last two years.

Here are the details, and don't forget the free T-Shirt!

This year at the upcoming 2010 IIUG Conference, attendees will have an opportunity to see and give feedback on new capabilities at the Usability Sandbox. Help us make IDS even better! Look for signup sheets. Sessions include:

· Hands-on experience test-driving the latest Schema and Storage Manager UIs for the OpenAdmin Tool

· A sneak peek and feedback on the new IDS Installation and Configuration Tooling/UI: a group walkthrough discussion with the IDS Dev team

· A sneak peek and feedback on the new IDS Deployment Tooling/UI: a group walkthrough discussion with the IDS Dev team

· Your opportunity to give your input on useful sources and resources for solving problems encountered while using IDS: an examination of current and future resources, followed by a group exercise to provide your preferences

· How Optim may be used for problem determination and resolution with IDS: a walkthrough and group discussion

Today the Informix 11.50.UC6 Developer Edition Ubuntu packages went live on the Ubuntu Partner Repository.

These packages are the easiest way to install Informix products on any platform.

As my screen shot shows I upgraded from UC5, and as one comes to expect with Ubuntu packages the upgrade was quick and painless.

To see the Informix packages in the Synaptic package manager, go to Software Sources->3rd Party Software and enable the Ubuntu Hardy Partner repository. More detailed instructions, including how to install these packages on later versions of Ubuntu (by default these are for the Hardy - 8.04 LTS release) are available in my earlier post.
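If you prefer the command line to Synaptic, the same steps can be sketched like this. The script only stages the partner repository line in a temporary file so you can inspect it first; the package name at the end (informix) is my assumption, so confirm the exact names with `apt-cache search informix` once the repository is enabled.

```shell
#!/bin/sh
# Sketch: command-line equivalent of the Synaptic steps. Stages the
# Hardy partner repository line in a temp file for inspection rather
# than editing /etc/apt directly; the "informix" package name is an
# assumption.
LIST=$(mktemp)
echo "deb http://archive.canonical.com/ubuntu hardy partner" > "$LIST"
cat "$LIST"
# Then, as root:
#   cp "$LIST" /etc/apt/sources.list.d/canonical-partner.list
#   apt-get update && apt-get install informix
```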

If you are wondering when we are going to support later versions of Ubuntu which don't require libstdc++5 to be installed, the Panther release of IDS expected later in the year will dispense with the libstdc++5 requirement and support the next Ubuntu Long Term Support release 10.04 LTS.