{Yes folks ... it's been a long time since the last blog entry. What better topic to resume with than the exporter utility for which the author has gained a small measure of recognition?}

In years gone by the OMNIbus Web GUI was called "Webtop" and there was no explicit import/export utility that allowed taking part or all of a Webtop configuration and exporting it from a development environment to a production environment. The exporter began life as a way to do that, taking advantage of the Webtop Administration Application Programming Interface (WAAPI for short) introduced with Webtop 1.1.

From the beginning there has been a tension: WAAPI does not support the complete set of configuration options that the Admin GUI provides. This has continued from version to version of Webtop and Web GUI, but it has been "close enough" for many people to find the exporter useful.

When Web GUI 7.3.1 was introduced, there was finally a native, supported import/export function included with the product. This reduced the demand for the exporter somewhat, but the feature sets don't strictly overlap. The native export/import has the great advantage of working in conjunction with TIP (and now DASH) import/export, so that an entire portal configuration (including role assignments, views, pages, console preference profiles, etc.) can be exported from one environment to another.

The exporter is a better choice when you are moving Web GUI objects only and you are changing versions (e.g. from 7.4 -> 8.1). It's also the only option if you are looking to generate WAAPI output.

--

The last "released" version (2.5) of the exporter supported Web GUI through 7.3.1 (yeah... it's been a while). There were any number of "betas" that supported 7.4, but along came Web GUI 8.1. So this release, 2.6, supports Web GUI through 8.1.

Here's the Linux version. It's a 32-bit executable that should work fine on 64-bit systems as long as the appropriate 32-bit libraries are installed. If you have a 64-bit system and you get this error: "./ncw_export: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory" - it means the 32-bit runtime loader is missing; install your distribution's 32-bit C library package (e.g. glibc.i686 on RHEL-style systems) and try again.

In my introductory blog post I mentioned that I've been working with OMNIbus for a long time. For evidence, I will point out that I was one of the first batch of Certified Netcool Engineers (there were five of us that passed.) This was the "beta" version of the OMNIbus certification program and we were the first (back in January 1998.) So why do I bring this up?

I bring it up because one of the exercises in the certification involved diagnosing a poorly performing ObjectServer, and one of the main problems was that the ObjectServer was bogged down with tens of thousands of records in alerts.details. Fifteen years later, I still see this all the time when I'm first called to look at existing customer deployments.

Part of the problem is that our own Netcool Knowledge Library (NCKL) is a terrible offender. "Out of the box", it is littered with hundreds of details statements that are not commented out. Particularly odious are the "details($*)" statements. I did a quick check of the 3.8 release of NCKL. Without uncommenting any vendor includes, there were still 430 instances of details, including 88 instances of details($*).

This is a recipe for bogging your ObjectServer down quickly. Remember that every "detail" is an individual row in the alerts.details table. So a statement like: details($ifIndex,$ifAdminStatus,$ifOperStatus) isn't too bad because it only inserts three records. But details($*) inserts a record for every probe element variable. So if you have 10,000 events in alerts.status that were processed by that part of the rules file, you could easily have 100,000-200,000 details records dragging down your ObjectServer.

The original idea behind the details feature was that it was nice to be able to associate various name-value pairs with an event while debugging the construction of probe rules files. One of the first rules of "Best Practice" for those of us who went to customer sites was to make sure that we had commented out all of the "details" statements in the rules files before we left the site. This, however, was centered around an ideal for rules development that somehow thought that rules files could be "done".

In our rough and tumble real world of Network Management, new devices (or new agent versions) are constantly introduced into the managed network and thus rules file development will always lag the actual events (such as SNMP traps) being seen. Thus a rules file would usually end with some sort of "catch all" clause to handle an unforeseen event -- and that part of the file would generally include an uncommented details statement - for the benefit of the rules maintainer. From that seed of practice grew NCKL's current undisciplined behavior.

An Alternative

In recent years, an excellent alternative to details has been added to the product. OMNIbus 7.2 added the ExtendedAttr column as a standard column in alerts.status. Another addition was the nvp_add function to the probe rules language. This function makes it easy to keep a set of name-value pairs in a character variable. Although ExtendedAttr was originally intended as a place to hold various name-value pairs coming from Tivoli's legacy TEC console, it makes a great field to store the information that used to be put into the details. The overhead of adding a few more characters into a varchar within the same event is an order of magnitude lower than maintaining an N:1 joining relationship between the status and details tables.

The best news is: You can get a utility to do it for you right here. The following example shows it being used with one of the standard files in the NCKL distribution. The file is converted (the utility saves the original as a ".bak" file) and a "diff" shows the changes:
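The converted file and "diff" output didn't survive in this copy of the post, but the shape of the transformation is easy to sketch. Here is a hypothetical before/after (the element names are just illustrations; check the nvp_add documentation for your OMNIbus version for the exact calling convention):

```
# before: each pair becomes its own row in alerts.details
details($ifIndex, $ifAdminStatus, $ifOperStatus)

# after: the same pairs are packed into the ExtendedAttr column
@ExtendedAttr = nvp_add(@ExtendedAttr,
                        "ifIndex", $ifIndex,
                        "ifAdminStatus", $ifAdminStatus,
                        "ifOperStatus", $ifOperStatus)
```

Three details rows become a few dozen extra characters in a varchar that is inserted anyway.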

I have used this in production at several customer sites and it has relieved ObjectServer overloading.

Drawbacks

The primary drawback to packing name-value pairs in the ExtendedAttr column is that there is no equivalent to the "Details" tab in the event clients. The column is just a string of name-value pairs separated by semi-colons. Here's an example:

[Note that it's possible to search for the values of individual name-value pairs. This event would be one of potentially multiple events returned by:

This is another nice feature - a similar query involving a sub-select with alerts.details is a lot more complicated.]
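The query itself didn't survive in this copy of the post, but the shape of it is simple. As a sketch (treat the exact quoting of the packed pairs as an assumption - it may differ by OMNIbus version), the server-side search is an ordinary LIKE, and the same pattern pulls a value back out client-side:

```shell
# Hypothetical ExtendedAttr value: semicolon-separated name-value pairs
ea='ifIndex="3";ifAdminStatus="1";ifOperStatus="2"'

# Server side you would write something like:
#   select Serial, Node from alerts.status where ExtendedAttr like '%ifIndex="3"%';
# Client side, extracting a single pair's value is one sed expression:
echo "$ea" | sed -n 's/.*ifIndex="\([^"]*\)".*/\1/p'
```

No sub-select, no join to a details table - just substring matching on one column.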

Another possible drawback would be the case where the value of a particular element includes a semi-colon. If this is an issue, the value could be pre-processed with regreplace, replacing the semicolons with some other character.
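A sketch of that pre-processing step in the rules language (the element name and the replacement character are arbitrary choices for illustration):

```
# Strip semicolons from the value before packing it, so the pair
# separators in ExtendedAttr stay unambiguous
$ifDescr = regreplace($ifDescr, ";", ",")
@ExtendedAttr = nvp_add(@ExtendedAttr, "ifDescr", $ifDescr)
```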

Many complex probe rules files are now really a set of files. A classic example is the snmptrap.rules file from the Netcool Include Library, which can actually 'include' hundreds of other files. While this is a good way to structure a complex set of mostly independent cases (SNMP traps from different equipment manufacturers), it can make certain things tough. Examples include:

How can I see the tree of files and the order in which they are included?

How can I find out which files have code that modifies a certain field?

How can I package up the current set of files (including lookup tables) to distribute them?

I've created a nifty little utility called "rules_tree.pl" to handle rules files as a set. You can get it here. I'll give a couple of examples of how it's used:

In this example, the entire tree of files included by the snmptrap.rules file (including lookup files) is piped as input to the tar command, which creates the archive "snmptrap.rules_tree.tar". Note that even if a rules file is included multiple times in the course of expanding the rules file, the file name will only be output once (unless the -showtree option is used).
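The command itself isn't reproduced above, but assuming rules_tree.pl prints one path per line (as described), the archive step would look roughly like `perl rules_tree.pl snmptrap.rules | tar -cf snmptrap.rules_tree.tar -T -`. The `-T -` half of that pipeline is standard tar and easy to demonstrate on its own:

```shell
# tar -T - reads the list of files to archive from stdin, so any
# program that emits file names (like rules_tree.pl) can feed it
tmpdir=$(mktemp -d) && cd "$tmpdir"
echo x > a.rules
echo y > b.include
printf 'a.rules\nb.include\n' | tar -cf tree.tar -T -
tar -tf tree.tar     # lists the two archived files
```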

This example shows an easy way to "grep" an entire tree of files. In this particular case, we are looking for references to the Location column. The -contents option requests that the contents of the files be printed. The "-prepend" option causes the utility to prepend the file name in front of each line - this makes the output of the grep include the file name for each match.

$ perl rules_tree.pl -contents -prepend syslog.rules | grep @Location

This example shows the utility used to pass each file in the tree to another command for processing. In this particular case, the command is another perl script, details2nvp.pl, that converts "details" statements in rules files into assignments into the ExtendedAttr column. (More on that utility in the next blog post.) The use of the "-print0" argument along with the -0 argument to xargs allows for file names that have spaces in them.
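The invocation isn't reproduced above, but it would look roughly like `perl rules_tree.pl -print0 snmptrap.rules | xargs -0 -n 1 perl details2nvp.pl` (script names per the text; any flags to details2nvp.pl are omitted here). The NUL-delimiter hand-off itself is standard shell plumbing, shown below with printf standing in for rules_tree.pl:

```shell
# NUL-terminated names survive embedded spaces; xargs -0 splits on NUL.
# 'echo' stands in for the per-file command (e.g. details2nvp.pl).
printf 'my vendor.include\0snmptrap.rules\0' | xargs -0 -n 1 echo
```

With newline-delimited names, "my vendor.include" would have been split into two bogus arguments; with NUL delimiters it arrives intact.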

I always try to install the various Netcool products as a dedicated user (usually "netcool") vs installing as root. There are a lot of good reasons for this - worth a blog post sometime, but not at the moment.

After installing the refreshed version of ITNM 3.9, I had to run a script (as root) to install Informix. This script is in $NCHOME/precision/install/scripts and it is named "install_ids_root.ksh". Unfortunately, the documentation does not contain sufficient warnings about this script, so here's my hard-earned knowledge. (Note: updated 10/10/12 as I learned more.)

Assume for the moment that your NCHOME is /opt/netcool/ibm owned by "netcool" with group ownership of "ncoadmin", and your precision domain is called TEST.

1) Make sure (at least on AIX) that /usr/sbin is in your PATH, since the script will attempt to create users (informix, ncim) with the "useradd" command. If you create these users yourself, make sure they have valid passwords that match what you gave the installer for the database password. (If you've forgotten that password, it's in ../data/ids.properties, as explained in step 4.)

2) Make sure the path to $NCHOME/platform is at least r-xr-xr-x.

3) Make sure to have the precision domain set in your environment, e.g. export PRECISION_DOMAIN=TEST.

4) Run the script with "-f ../data/ids.properties" - that file has various settings (like NCHOME) taken from the installer session.

If things go wrong, you can use "$NCHOME/Uninstall_ITNM -i" to wipe out Informix and try again. If you prebuilt the informix and ncim users, you'll have to create them again.

By the way, there are several important environment variables to set for Informix. Here's a snippet from the .profile (or .bashrc) file for the "netcool" and "informix" users:
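The original snippet isn't reproduced here, but the usual Informix client/server variables look something like the following - the values (NCHOME path, platform directory, server name) are assumptions that must match your own install and your sqlhosts file:

```shell
# Assumed locations - adjust NCHOME, the platform directory, and the
# server name (ITNM here, matching sqlhosts.ITNM) for your site
export NCHOME=/opt/netcool/ibm
export INFORMIXDIR=$NCHOME/platform/linux2x86/informix
export INFORMIXSERVER=ITNM
export INFORMIXSQLHOSTS=$INFORMIXDIR/etc/sqlhosts.ITNM
export ONCONFIG=onconfig.ITNM
export PATH=$INFORMIXDIR/bin:$PATH
export LD_LIBRARY_PATH=$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$LD_LIBRARY_PATH
```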

Once the Informix database is built, you should be able to connect to it with Informix's "dbaccess" command. For starters, log in (or su) as the Informix user (make sure it has the environment variables set as shown above). Informix uses operating system identification - if you are on the same host and have already authenticated yourself as "informix", it will automatically authenticate you to the database.

Try "dbaccess itnm" - If all goes well, you'll be in the client and able to access basic information such as the dbspaces that have been built. In my case, there was another glitch....

The host (we'll call it "itnmhost") was multi-homed. The Informix install created the etc/sqlhosts.ITNM file with a line that looked like:

ITNM onsoctcp *itnmhost 9088

I could not authenticate to the database (Error 951) - neither the "informix" user nor the "ncim" user worked. However, once I removed the asterisk before the host name and made the line:

ITNM onsoctcp itnmhost 9088

the problem was fixed. Now I could connect to the database and populate it with the $NCHOME/precision/scripts/sql/informix/populate_informix_database.sh script. (The installation guide does not do a good job of explaining that you need to run this script if you are installing Informix after running the ITNM installer.)

Lastly, while still authenticated as the "informix" user, I gave the regular "netcool" connection privileges:

$ echo "grant connect to 'netcool';" | dbaccess itnm -

This allows me to run the dbaccess command as the netcool user without having to use a "connect" statement to authenticate to Informix (while on the same host).

My customer downloaded the part, but when I looked for CI7F1ML.tar.gz in the specified directory, it was nowhere to be seen. Instead there was "ITNP_IP_AIX.tar.gz" - what the heck? Well, it certainly wasn't my customer's fault, because when I tried downloading it in XL, Download Director suggested that same name.

Then there is the mystery of the OMNIbus Part Number. Depending on how you search, you will find either

They are almost the same size (358,993,920 vs 359,086,080 bytes) - I sure hope they are the same except for something like header information.

The interesting thing is if you download the first one, Download Director will try to create a file named "CZW01ML.tar" - as expected, while the second one wants to create a file named "NCO_CORE_AIX_ML.tar". Among other things, this latter name will not work with the ITNM installer, which looks for either CI3JLML.tar or OMNIbus-v7.3.1-aix5-5.21.36.tar (which is what I had to rename the file to, in order to let the ITNM installer do OMNIbus for me too.)

I've never been a big fan of part numbers for file names. Once I dump it in a directory, who would know that CI3JLML.tar is really OMNIbus 7.3.1 for AIX? But this change is worse - the suggested file names have no revision information in them at all. Wouldn't it make a lot more sense if the suggested download file name were (in this case) "OMNIbus-v7.3.1-aix5-5.21.36.tar"?

The itnm_status command is certainly handy - so handy, in fact, that even on distributed ITNM installs I will put it on systems that have only OMNIbus components (i.e. only nco_pad running, no ncp_ctrl and no TIP).

I much prefer just typing "itnm_status" and seeing what's running vs "nco_pa_status" and having to deal with authentication.

So imagine my annoyance when I found that it wasn't displaying the status of the syslog probe in my latest ITNM deployment. (OK actually, it was the "syslogd" probe - and syslog vs. syslogd is worth another post someday.) At any rate, I traced the misbehavior down to this code in nco_control.sh:

I have no idea why someone thought it was a good idea to hide the status of processes deemed "unrelated" to ITNM. It seems like a dubious proposition in the first place, as it's highly likely that if a process is managed by the same PA daemon as the ObjectServer, the SNMP trap probe and so forth, it's probably "related". What's really not debatable is that a syslog probe would certainly be "related". Here's how I "fixed" it for the time being:

I've mentioned in a number of blog entries that I consider non-root installation and operation of ITNM to be "Best Practice". One area where the "out of the box" ITNM product doesn't support this is the system startup scripts. These are the scripts that reside in /etc/init.d (the path varies slightly by *NIX variant). Of course, if you are running the ITNM installer as a non-root user, the installer won't be able to put scripts in the protected system directory, but it will still create versions in $NCHOME/precision/custom/control/init.d. The three scripts it creates are:

ncp - The startup script for all of the "core" precision processes

nco - The startup script for OMNIbus processes

tip - The startup script for the Tivoli Integrated Portal

N.B. These scripts are built as part of the process of building and customizing the various ITNM control scripts (e.g. itnm_start, itnm_status, and so forth). There is a master script, $NCHOME/precision/install/scripts/create_all_control.sh, that does this - it's easy to figure out the chain of subordinate scripts from there.

The problem with the stock startup scripts is that they provide no mechanism for starting their processes as something other than root. I have added that feature to the scripts along with a few other improvements. You can get them here. (UPDATE 1/23/13 - Fixed export of NC_RULES_HOME in the "nco" script.)

These scripts can be run both by the root user (either at startup or later) or by the user reserved to be the owner of the ITNM processes (I use "netcool" unless my customer provisioned a different name.)

EDIT: One more thing to note - these startup scripts do require the ITNM control scripts to be present (which is not the default if the server only has OMNIbus stuff). Here's how to fix that.

EDIT: And another thing to remember - change the modes to r-xr-xr-x after copying them to the appropriate startup directory, and don't forget to make the appropriate symlinks. For example, installing the nco script on Linux would be done like:
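The command listing didn't survive in this copy of the post, but a sketch of that install on a RHEL-style Linux box looks like this (run as root; the run levels and the chkconfig step are assumptions - use your distribution's convention):

```
# cp $NCHOME/precision/custom/control/init.d/nco /etc/init.d/nco
# chmod 555 /etc/init.d/nco
# chkconfig --add nco
```

On systems without chkconfig, make the run-level symlinks by hand, e.g. `ln -s /etc/init.d/nco /etc/rc3.d/S99nco`.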

What a head-scratcher! I'd been working on refining the discovery of a Customer's network, when one day all ITNM Network Views started appearing as if they were empty! What was particularly strange is that this change came on suddenly, during a period where I had made no changes to the Customer ITNM deployment. The only thing that had "changed", so to speak, was that my laptop had been forcibly rebooted by our Tivoli Endpoint Management software. I don't know if it was an update to the IBM java client itself, or something related to it, but the same Network Views that worked the day before now showed up empty.

The first thing I did was check the NCIM database, to make sure the entities were still there. I could still report on the entities via SQL, so it had to be something with Topoviz or the actual Java client.

One nice thing about Topoviz is that if you change the logging level in the properties file, you don't have to restart anything. I edited the $NCHOME/precision/profiles/TIPProfile/etc/tnm/topoviz.properties file, and changed this property: topoviz.log.level=ALL

Right away, the $NCHOME/precision/profiles/TIPProfile/logs/tnm/ncp_topoviz.0.log file started to have much more information about what the client was doing. But it still didn't make sense to me.

It was time for help from Support, and I was pointed to this Technote:

The Technote said to edit the java.policy file - that took a bit of research because my laptop (like most) had several JREs around - So first I had to figure out which one my browsers (both Firefox and IE) were using. For my IBM Windows 7 based laptop, the security file turned out to be:

C:\Program Files (x86)\IBM\Java60\jre\lib\security\java.policy

The next issue is that the Technote's suggested fix is syntactically wrong. The correct fix (added to the end of the policy file) looks something like this - obviously the stuff in bold should be set correctly for the site and host in question:
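The corrected snippet didn't survive in this copy of the post, but the general shape of a java.policy addition is shown below. The host name and port here are placeholders (16310 is a common default TIP port) and the specific permission may differ in your Technote - adjust both for your site:

```
grant codeBase "http://itnmhost:16310/-" {
    permission java.net.SocketPermission "itnmhost:16310", "connect,resolve";
};
```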

The recent re-release of ITNM 3.9 is a welcome thing. It's a huge timesaver for a new install, not having to deal with installing ITNM and then immediately applying Fix Pack 2. So hooray for that!

Here are some other benefits:

ITNM 3.9 now supports IE 9 and Firefox ESR 10 (although in my experience, FF ESR 10 renders some portlets wrong the first time and you have to refresh the page). Another nice thing is that an explicit DB2 entitlement is included, so you can substitute DB2 for Informix (which is looking more and more like a good idea.)

Lastly, there are a lot of updates to bundled software

The package contains an updated OMNIbus Web GUI

The package upgrades Informix (but consider DB2 or MySQL)

The package contains a newer TIP

The package contains a newer TCR

So what's to complain about? These two things:

The package includes an older version of the SNMP trap probe

The package includes an older version of the Netcool Include Library

Seems like an oversight to me. This means that after running the ITNM installer, a conscientious installer needs to upgrade both the SNMP probe and the NcKL stuff. The former should be done via the usual nco_install_integration, while the latter is done via a straight overwrite of the rules tar.

Note that NcKL 3.6 did make a minor (but important) change to the Advanced Correlation automations. A "when" clause was added to each automation that makes sure that the automation only runs when the ObjectServer is the acting primary. Obviously this only matters when dealing with a pair of ObjectServers in a fail-over configuration, but it's good practice and will hopefully be standard soon. For the purposes of installing the newer NcKL, it's easy to either re-run the SQL file for the automations (ignoring errors for columns already built) or just update them by hand.

I am a strong proponent of installing and running products from the Netcool suite under a dedicated user ID that is not the root user. Of course this means that certain processes that require root access will need to be set up as "setuid root". When I install the ITNM product, there's a handy script that does just that for the core ITNM processes (things like ncp_df_ping, ncp_dh_ping, ncp_poller and so forth.)

Curiously, the ITNM scripts don't do anything about the SNMP trap probe (aka the "mttrapd" probe), even though the ITNM installer will happily install this probe as part of a unified ITNM/OMNIbus install. For the record, here's how to do it: (HT to Alex Greenbank for the 64-bit linux info)

For linux: (First you must check for 32 vs 64 bit versions)

# $OMNIHOME/probes/nco_p_mttrapd -version
Netcool/OMNIbus MTTrapd probe - Version 7.4.0 64-bit   <-- Notice that this example is the 64-bit version
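The commands themselves didn't survive in this copy of the post, but for a 64-bit probe the setuid step is two commands run as root - the linux2x86_64 directory name is an assumption; use whatever directory your probe binary actually lives in:

```
# chown root $OMNIHOME/probes/linux2x86_64/nco_p_mttrapd
# chmod 4555 $OMNIHOME/probes/linux2x86_64/nco_p_mttrapd
```

Afterwards, `ls -l` on the binary should show the setuid bit (-r-sr-xr-x) with root as the owner.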

All of the links are necessary because AIX requires setuid programs to use shared libraries from the default search path (/lib or /usr/lib).

Note: The SNMP trap probe gives up its root privileges once it has opened the UDP port specified by its properties. When examining the running probe with the "ps" command, don't be alarmed to see it running as an ordinary user, even though you configured it for "setuid root" operation.

A Further Note: It's high time we Netcoolers stopped using the term "Multi-threaded trap probe" for this probe. That term is a legacy from years ago when most probes were single-threaded and the new "multi-threaded" trap probe represented a significant performance boost. Now all our probes are multi-threaded and it makes no sense to call this out in the name. Let's get back to calling it what it is: The SNMP Trap probe.

Update for Solaris: I haven't vetted this personally, but here's a link for doing this on Solaris.

Imagine, if you will, the cyberspace equivalent of a modest room with a circle of chairs in the center. The chairs are occupied by a variety of technical types, brought together to this support group because they share the same problem. Yours Truly, gets up, faces his peers and says:

"Hi. My name is Roger Powell and I am an Information Hoarder. I work with the Netcool/OMNIbus and Tivoli Network Manager products, and I've got lots of helpful tidbits of information, various utilities, and plenty of experience that I've been meaning to share ... BUT ... I don't share anything unless it's perfect! If I've written a utility script, it has to be clean and general purpose with help text. If I decide to respond to a question on the International User's Group mailing list, my answer must be 100% correct as if the voice of God himself (or at least Alex Greenbank) had spoken. Because of this, I've likely deprived people of the chance to profit from my experience."

At this point in my imaginary scenario, all my peers offer encouragement and tell me that they value me as a person even if I am an Information Hoarder. The imaginary group leader though, prescribes what he thinks is the perfect remedy: "You need to blog!"

Um yeah - active imagination indeed! So this perfectionist stands before you (or whatever the cyberspace equivalent is) and expressly disavows perfection. I will not wait to make stuff "exactly right" before I post it. I'll be happy to fix errors once they are pointed out to me. And ... I promise that no other post on this blog will be as dull as this first one!

OK - A little more housekeeping ... The blog will focus on Tivoli's Netcool/OMNIbus and Network Manager products because that's what I do for a living. The perspective will be from what I do now - post-sales consulting for IBM - although I've done other things. I've worked with Netcool/OMNIbus since late 1997 (that's almost 15 years if you do the math) so I've seen a few installations in my time. My aim is to be interesting and useful - at least for those who work with OMNIbus, the OMNIbus Web-based GUI, and ITNM. Everybody else should have surfed away by now!

One final point today - If you are still reading this, you are a member of the INUG, right? (That's the International Netcool User's Group.) It was formed back when the Netcool products all came from Micromuse, so it remains focused on those products as opposed to others in the Tivoli Suite. If you're not a member, get thee hence and join! It's a great resource.

I should make it clear right now - I have no connection with the INUG, other than being a member. In turn, the INUG has no official relationship with IBM, but several IBMers do answer questions there (the aforementioned Alex Greenbank being a superstar IMO.)

And lastly this disclaimer: I am an IBM employee, however the statements in this blog are my own and do not necessarily represent the opinions or official positions of IBM. I'm solely responsible for the content of the posts.