There are quite a few new features in this Fix Pack, so you may want to download it and give it a try. Here is a list from the fix pack readme for quick reference:

7.3.1 NetApp Sensor

Two new storage-related sensors have been added to TADDM 7.2.2 FP2. The NetApp sensor discovers network-attached storage (NAS) resources, and the SnapDrive sensor discovers storage resources that are related to NetApp SnapDrive software for Windows.

7.3.2 HP BladeSystem SNMP Sensor

The new HP BladeSystem SNMP sensor discovers and collects configuration information about the HP BladeSystem chassis.

You can now append the IP address or the name of the scope to the com.collation.discover.agent.HMC.discoverStorageMapping property.
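A hypothetical example of such an entry, assuming the usual TADDM convention of appending the scope name or IP address to the property name (the address and value here are illustrative only):

com.collation.discover.agent.HMC.discoverStorageMapping.10.10.5.1=true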

7.5 Access list entries and location tagging

When location tagging has been enabled, you can now create access list entries with a location tag assigned.

Known Issues:

The Fix Pack is still new, so I do not have a lot to report yet from the field, but so far so good: we have a few early adopters and no major functional issues have been reported. There have been only two primary schema upgrade issues, which have impacted a small group of customers:

1) For DB2 9.7 users, especially on older fix packs 4 and 5, we have had issues with the DB2 function "VARCHAR_BIT_FORMAT" missing. This caused the primary schema upgrade to fail with the following error:

This function should exist in DB2 9.7 fix pack 4, but you can run this SQL against the TADDM DB to test for it:

select VARCHAR_BIT_FORMAT(GUID_X) from COMPSYS where TYPE_X='DataPower'

If you get a "SQLCODE=-440" running this work with your DBA or DB2 support to determine why the function is missing. If necessary, open a PMR with TADDM, if you do not have Data Power configured in TADDM prior to this fix pack we can walk you thru skipping these steps.

2) One customer who is using Oracle for the TADDM database needed to increase the TEMP tablespace size to 23G to complete the primary schema upgrade. They only had this issue in one of two environments, and we do not know the root cause, as they were unwilling to pursue it from the database end; given it only happened in one of two environments, we believe the issue may be specific to that environment. I mention it so that, if you are on Oracle, you are aware of it and can perhaps have a DBA available during the upgrade in the off chance you hit temp space issues.

Where is the doc?

If you have not already heard, all new updates to IBM documentation are being made only in Knowledge Center, so all Fix Pack 2 doc is ONLY in Knowledge Center. You can access it here:

New items in Fix Pack 2 are marked within the documentation with a small, blue 'Fix Pack 2' graphic. In the case of TADDM 722, we only have FP1 and FP2 graphics, while in 721 the Fix Packs and corresponding graphics go up to six. That way, users can always tell if content has been added for a new Fix Pack.

Info Center continues to be available; however, it will not be updated with the new Fix Pack 2 features. At some point Info Center will be redirected to Knowledge Center. We will update the links in the product console in the next Fix Pack, but please change your bookmarks now if you have them, especially if you want to take advantage of any of the new features.

Please note that we have had some issues with the Knowledge Center search. It was re-indexed recently, which has corrected the problem, at least temporarily. If you are searching and do not find what you need but believe it exists, try these tips for alternative search mechanisms:

Also, if you have already viewed a page in Knowledge Center, any time that page changes you need to clear your browser cache and reload the page. We do not change the publications much outside of Fix Packs and releases, so this should not be a concern, but if the above link fails to find what you are looking for, try clearing your browser cache.

And yes, these issues are being worked on. I ran some test searches this week on the new content and all seemed fine, so I am hoping you have no issues, but I wanted to let you know, just in case.

Things to know before you upgrade:

Please read the Fix Pack 2 readme, which is also available on Fix Central, for prerequisites and installation instructions. Beyond that, if you have any recent efixes (May 2014 or later) on your TADDM server, check via the Support Site that they were included in Fix Pack 2. If not, open a PMR with Support to have those efixes ported to FP2 at least two weeks prior to your planned upgrade.

If you have any questions or concerns about Fix Pack 2, please comment below. Thanks!

Do you ever wonder what all those Topology Builder agents are and what they are doing with the TADDM data? If so, you have come to the right place. This is the first post in a series where I will attempt to document most of the Topology Builder agents.

To start off, I wanted to provide a brief overview of the topology builder agent groupings. In TADDM 7.2.1.4 and later there are four types of topology agents:

1. Dependency agents - By default, these run 30 minutes after the end of the last run, provided no other group is running. Most of the agents in this group build dependencies between objects. To learn more about dependencies see this link:

In addition to dependency creation, this group also includes agents that:

- consolidate computer systems
- rename objects if the current name can be enhanced
- build out the objects in business apps that were created via MQL in the grouping composer
- build out business apps for host app descriptors

2. Cleanup agents - By default, these run every 4 hours after the end of the last run, provided no other group is running. These agents are responsible for cleaning data in the database, such as removing old aliases, correcting alias and persobj entries, and maintaining the aliases_jn table. There is also a new agent, RegistrationInfoAgent, that logs healthcheck and other data to log/taddm.info for Support.

3. Background agents - By default, these run every 4 hours after the end of the last run, provided no other group is running. These agents are very similar to the cleanup agents; they also perform database cleanup tasks, such as cleaning up old runtime processes and fixing tables that are split for a class (e.g. appssrvr/appsdb2) when data is missing in one but not the other. There is also an agent in this group that cleans up redundant AppServer instances that have corresponding Oracle or Sybase servers. There is one dependency agent in this group, HostDependencyAgent, which handles the construction of system dependencies between application servers/server processes and computer systems.

4. Integration agents - By default, these run every hour after the end of the last run, provided no other group is running. For 7.2.1.4 this group contains only the OSLCAgent, which is for integration with Open Services for Lifecycle Collaboration (OSLC) platforms.

Last of all, if you ever need to run topology agents manually, see this technote for details:

As mentioned in my earlier post, the first topology builder agent I will discuss is the AliasesCleanup agent. This agent is important because it cleans up old aliases, which could otherwise cause unexpected merge conditions in TADDM.

You may be asking 'what is an alias?', so without going into too much detail, an 'alias' is simply a representation of a naming rule for a specific guid. TADDM determines if two objects are the same by looking for matching naming rules. For example, manufacturer, model and serial number are the attributes for one of the ComputerSystem class naming rules. If you have two computer systems whose manufacturer is IBM, model is 5555 and serial number is 1234567, TADDM will merge them into one, as these attributes should be unique to a single computer system. Each guid can have multiple aliases, one for each possible naming rule whose attributes are discovered. If any naming rule matches between two CIs of a similar class, they will be merged into one.

If you would like to learn more about aliases and naming rules, check out this link to an education assistant module on the topic:

The agent also removes aliases if an ObjectNotFoundException occurs when the agent API queries the guid.

Please note that the agent does not remove any row where the master guid matches the alias guid, as this row is required for the guid to be valid. So even if the naming rule is no longer valid for the aliases row where master guid = alias guid, it cannot be deleted. If an unexpected over merge is found with verify-data and this particular alias is incorrect, it is best to delete the guid via the api and then re-discover the object.

This agent also removes orphan rows in the aliases and persobj tables, e.g. rows for which there are no corresponding class table entries for the guid.

Properties for this agent

The following collation.properties entries apply to this topology builder agent:

com.ibm.cdb.topomgr.topobuilder.max.row.fetch

This property configures the batch size used to fetch aliases from the aliases table.
The default value is 1000.
If the property is set to -1, then the agent does not verify the aliases.

com.ibm.cdb.topomgr.topobuilder.max.row.delete

This property configures the batch size used to delete aliases.
The default value is 100.
If the property is set to -1, then the agent does not remove aliases; it only reports corrupted aliases in the agent log.

This property configures the number of CIs for which aliases should be verified during a single agent run.
The default value is 1000. The number of CIs that are verified is this value * com.ibm.cdb.topomgr.topobuilder.max.row.fetch, so 1,000,000 by default.
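For reference, the two named properties above with their default values would appear in collation.properties as:

com.ibm.cdb.topomgr.topobuilder.max.row.fetch=1000
com.ibm.cdb.topomgr.topobuilder.max.row.delete=100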

Note that increasing any of the above properties will cause the agent to take longer to run and will require more database transaction log (DB2) or undo (Oracle) space.

com.ibm.cdb.topomgr.topobuilder.cleanupOrphanedAliasesAndPersobj

This property enables/disables cleanup of aliases in the aliases table and guids in the persobj table which do not have any corresponding CI.
The default value is true, which means the agent does the cleanup.

com.ibm.cdb.topomgr.topobuilder.DelayToRemoveAliases

This property defines how old (in hours) aliases without a corresponding CI should be before the agent attempts to remove them. This protects against removing aliases that were just registered in the aliases table but whose CI has not finished storing yet.
The default value is 12 hours, which means that orphaned aliases older than 12 hours are removed by the agent. Use this property with caution and do not set it to smaller values; the minimum value should be greater than the duration of the longest discovery performed on the system.

Agent Logging

With INFO logging, you can see useful messages in log/services/TopologyBuilder.log or log/agents/AliasesCleanupAgent.log about the work the agent is going to do or has done:

In the above messages you can see that the agent was able to process the entire aliases table in one scan, as noted by "ALIASES SCAN STARTED FROM TOP OF TABLE" and "ALIASES SCAN ENDED, REACHED END OF TABLE". If your aliases table is large, it may take multiple runs of the agent to complete the entire scan.

With DEBUG logging you can see the values of the previously discussed properties:

In my last post I discussed the AliasesCleanup agent, so I thought the next logical agent to discuss would be the ObjectsWithoutAliasesCleanupAgent. You guessed it, another agent that affects the aliases table. I think the name of this agent explains it pretty well: it cleans up objects that do not have aliases. As I mentioned in my earlier post, a CI must have at least one alias in order to be valid. If there are none, then this topology builder agent deletes the CI.

You may be asking, how do I get objects without aliases? Typically there are not many of these, but they can occur when a delete or merge is interrupted, such as by stopping TADDM, a TADDM failure like an OutOfMemory mid-process, or even some cases of database errors. Leftover objects that do not really exist can cause duplicates in TADDM, hence the agent removes such objects to avoid this.

Properties for this agent

The following collation.properties entries apply to this topology builder agent:

This property limits the number of CIs that will be removed by the agent during one run.
The default value is 1000.
If the property is set to -1, then the agent exits without performing any cleanup and just prints the message:

DEBUG cdb.Default - ObjectsWithoutAliasesCleanupAgent is disabled

Agent Logging

With INFO logging, you can see useful messages in log/services/TopologyBuilder.log or log/agents/ObjectsWithoutAliasesCleanupAgent.log about the work the agent is going to do or has done:

In my last two posts I discussed two different aliases cleanup agents, the AliasesCleanupAgent and the ObjectsWithoutAliasesCleanupAgent, so I thought now was a good time to talk about the AliasesJnTableCleanupAgent. This agent does not act upon the aliases table, but rather the aliases_jn table. The agent is new in TADDM 7.2.1 fix pack 4. The table was added in fix pack 3 with manual cleanup instructions in the publications, which the agent has now replaced. This is basically a historical table that keeps track of changes to the aliases table via DB triggers. It is used only by the verify-data script's over merge option to find objects that may have unexpectedly merged. Support also uses this table when diagnosing 'missing' objects in TADDM. By tracking changes to aliases, this table allows the verify-data tool to determine when specific guids have potentially over merged by analyzing the alias activity for the guid.

This topology builder agent removes old rows from this table to keep it at a reasonable size. By default it removes anything older than 30 days. You can change the interval to remove data faster if you do not run verify-data with the over merge option and do not need to diagnose missing objects. We do not recommend keeping data much longer than 30 days, as that can impact TADDM performance depending on how large the table is. Starting with fix pack 5, the agent removes the rows in batches of 100. Prior to fix pack 5 it deleted all rows older than 30 days in one batch, as long as it took less than 30 minutes.

Properties for this agent

The following collation.properties entries apply to this topology builder agent:

This property determines how long, in seconds, this agent can run.
The default value is 1800 seconds (30 minutes). If set to 0, the agent will log the DEBUG message "AliasesJnTableCleanupAgent is disabled" and return without deleting any rows.

What if the Table gets too big?

If you find that the aliases_jn table is very large despite the AliasesJnTableCleanupAgent running, you may want to simply clear the table. It can grow large if you ran 7.2.1.3 for some time and never cleared it, as clearing was manual in that release. The row count may have grown so large at that time that the 30 minute run time will not allow the agent to catch up now. Also, if the table is huge, verify-data may take a very long time to run, making the tool impractical for detecting over merges. Over merge data will re-appear over time as you run discoveries if the problem persists, so it is OK to clear the table if you are not actively working on over merge issues (missing computer systems).

Procedure

**NOTE - this procedure involves deleting data directly from the database, which is only safe for a very small set of TADDM tables. Be sure to have a database backup before implementing this procedure. Only delete from the 'aliases_jn' table, not its parent table 'aliases'. Deleting from aliases would corrupt the database and require a restore.

To remove all records from the ALIASES_JN table, run one of the following statements:

- For DB2:

ALTER TABLE ALIASES_JN ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE

- For Oracle:

TRUNCATE TABLE ALIASES_JN

You could also increase the agent timeout or lower the number of days; however, if you truncate the table, the agent will typically keep up after that. If you find that it is not keeping up, e.g. the agent runs for 30 minutes every time and the row count of aliases_jn starts to grow large again, clear the table, increase the agent timeout to 1 hour (3600), and then verify that it is keeping up. Typically you do not want to increase the agent run time greatly, as doing so impacts the other agents, which have to wait for this agent to complete before they can run.

Agent Logging

With INFO logging, you can see useful messages in log/services/TopologyBuilder.log or log/agents/AliasesJnTableCleanupAgent.log about the number of rows the agent has cleaned up:

This document details the procedure used to map a complex Business Application in TADDM. One thing that will become clear as you read through this paper is that the role of the Subject Matter Expert (SME) for the Business Application is critical and cannot be eliminated. If no SME is available to tell you that the Business Application uses a DB2 database, then the road to mapping the Business Application will be exceedingly long, if not impossible. Without the SME, you will have to rely on “connection” information from known components of the App to find the unknown components.

The Customer Portal Business Application

I am going to map the Customer Portal business application. It is a real application from a real customer; however, I have made changes where necessary to protect the identity of the customer. The Business Application is composed of the following components:

F5 Load Balancers

Oracle 11g on Solaris

DB2 on Z mainframe

DB2 on AIX

WebSphere MQ

WebSphere Application Server 6.1

IBM WebSeal

IBM Tivoli Access Manager (TAM)

Tealeaf Customer Experience Software

Microsoft IIS

Microsoft SQL Server

Team Site

Live Site

Apache Web Server

Apache Tomcat

Discover the Components

The first step in building a business application is to ensure that all components are properly discovered by TADDM. No work is required for those components that are supported by TADDM out of the box; however, Custom Server Templates (and often Extensions) and Computer System Templates will need to be created in order to completely map the Application. Of the components listed above, the majority are supported out of the box. The following applications will require custom server templates (and possibly extensions).

IBM WebSeal

CST Criteria: Program name contains webseald

IBM TAM

CST Criteria: Windows Service Name contains “Access Manager”

Since Tivoli Access Manager is a WebSphere application, it can also be discovered through WebSphere discovery if you have WebSphere discovery enabled and you have the access list populated with the username, password and certificates.

TeaLeaf

Team Site

Live Site

Open Deploy

Options

There are three methods that can be used to cause a component to become a part of a Business Application.

Manual - should be avoided, as structural changes to the application will require effort to keep the business application up to date. That said, sometimes using a manual component is the only practical way to place a component in a business application.

Application Descriptors - formerly the solution of choice requiring small XML files to be deployed to the various target computer systems.

MQL Rules - statements similar to SQL which are executed against the TADDM server to select components to be added to the business application. You should strive to use MQL rules whenever possible.

The Business Application builder is not restricted to using only one of the methods above. Indeed, the best solution usually involves a combination of techniques.

Mapping the Business Application

As mentioned above, it is best to use MQL Rules to add components to Business Applications and we will use this method when possible.

Log in to the TADDM Data Management Portal and go to the "Grouping Composer" located in the Discovery Drawer.

Click to create a New Group and name it "Customer Portal - Production"

Leave the type set to Business Application, then click "Next"

On the next screen, you have the option to create one or more MQL rules. Keep it simple! Use a separate rule for each component type that you want added - it's even OK to use multiple rules for the same component type.

Adding DB2 to the Business Application

The first step in writing a rule to add a DB2Instance to the Business Application is to look at the details panel for the DB2 that you know to be part of the App. A good place to look is the Modules tab for an AppServer or the Databases tab for a database. The following diagram illustrates the technique. Below is the Databases tab for a DB2 that I have in my TADDM server.

If I was mapping my TADDM installation as a business application, I would be fairly sure that this database server was a part of it. Why? Because one of the database names contains the string "TADDM".

How do I compose my MQL?

The best way to validate that your MQL is returning the objects that you want (and not returning the ones you don't) is to test your MQL from the command line before you save it to the rule.

For example, I knew from the General tab that the type was “DB2 Instance”.

I could have a quick look in the model documentation (available in the file CDMWebsite.zip in $COLLATION_HOME/dist/sdk/doc/model on the installed TADDM server), or I could just guess that the type of the object was DB2Instance. To test that I had the right object type, I ran a find against that type from the command line, along the lines of the sketch below.
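A sketch of such a test query (the credentials are illustrative, and the exact command I ran is not reproduced here):

./api.sh -u administrator -p collation find --depth 1 "select * from DB2Instance"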

The screen fills with XML data, indicating that my command worked. Next, we need to narrow our search to find only the instances in which we are interested. To do this it's helpful to look at the object displayed as XML:

The rule name can be anything you like, but it's best to choose something descriptive. The Functional Group name is a simple string that is only used when comparing one Business Application with another. For example, a single business application could have two Apache servers - one which serves as the customer interface and one which is used by the application administrators to adjust the settings of the application. These two Apache servers would have different Functional Group names.

Click Next and you are taken to a screen where you can add manual components to the business application:

For now, we won't add any manual components, so press Next, then press “Finish”. After a short period of time, you will see the new Business Application appear in the UI. You can then view the Topology Graph for this Business Application.

Further Restricting the Rule

You may notice something irregular about the graph. It's possible that you will see DB2s in there that don't belong to the Production Customer Portal. It's likely that the UAT or Development Customer Portal DB2 would also be named “TADDM721”. One of the best ways to solve this problem is to further restrict the MQL rule by specifying that the hostname match some criteria. This will only work reliably for customers who have a strong hostname convention in place. At this particular company, hostnames are of the form: CIXYYYYZNNNN

where the “Z” character is D, S, or P for “Development”, “Staging”, or “Production” respectively.

If MQL supported matching on regular expressions, the additional syntax would be slight:

Unfortunately, MQL does not currently support regular expression pattern matching (I've opened RFE 14280 to have this implemented), so our MQL rule has the potential to miss some DB2 Instances. According to the host naming standard, “YYYY = Server Type - Alpha acronym for Server Type, i.e. IIS, SQL, etc.”. Assuming that there is an alpha acronym for DB2, and further assuming that it is “DB2”, then our MQL becomes:

The procedure for adding Oracle 11g and MS SQL to the Business Application is nearly identical to the procedure outlined above for DB2. The process is:

View the details panel for the component and find something that uniquely identifies that component as belonging to the Business Application (usually a database with a specific name)

Find out the type of the object by either looking at the common data model documentation (CDMWebsite.zip) or by making a logical guess based on the Object Type field shown on the General tab of the details panel

Test the select statement from the command line (./api.sh)

Place the rule in the Grouping Composer

What about Custom Servers?

As was mentioned earlier in this document, TADDM does not discover IBM WebSeal out of the box. In order to discover it, we created a Custom Server Template named “IBM WebSeal” with the criteria Program Name contains webseald. When we run discovery, all WebSeal servers should be discovered correctly. How do we pick the right WebSeal(s) to add to the Business Application? We can base it on the content of the configuration file. Assuming the webseald configuration is in the file /opt/pdweb/etc/webseald-default.conf, the Custom Server Template “Config File” tab would look like this:

Ideally, this file would be collected and parsed with a custom server extension and the “junctions” would be collected and created as modules. I have opened RFE 13343 requesting that IBM add support for discovery of WebSeal out of the box.

The following MQL restricts the WebSeals that will be added to the business application based on the contents of the config file:

TADDM's grouping composer is a fantastic addition to TADDM, but it is lacking in a few key areas.

1) You can't create an empty business application from the command line (from a spreadsheet, etc.)

2) You can't clone an existing business application including its rules (a clone can also serve as a rename)

3) You can't do a wholesale delete of components matching an MQL rule (this one is quite handy even for non business app components)

I've written several tools to solve the problems above. Download the zip file containing the utilities from here and extract them into $COLLATION_HOME/support/bin on your TADDM server(s).

createBA.jy - creates an empty business application. Invoke with a TADDM username and password followed by the business application to create:

$ ./createBA.jy administrator collation "Data Warehouse - Production"

cpBA.jy - copies one business application to another business application, including the rules. Invoke with a TADDM username and password, followed by the name of the source business application and the destination business application.
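A hypothetical invocation (the application names are illustrative), mirroring the cpba.jy example shown later in this series:

$ ./cpBA.jy administrator collation "Customer Portal" "Customer Portal - Production"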

mqldel.jy - deletes components from TADDM that are specified by an MQL statement. Invoke with a TADDM username and password followed by the MQL statement to process. Any guids returned by the MQL statement will be deleted!
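A hypothetical invocation (the MQL statement, including the "fivedaysago" placeholder referenced in the note below, is illustrative; I am assuming a filter on lastModifiedTime here):

$ ./mqldel.jy administrator collation "select * from ComputerSystem where lastModifiedTime < fivedaysago"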

Note: In the above mqldel example, you will need to calculate the date represented by "fivedaysago". You can search for another set of utilities I have written, UnixDate2Long.jy and long2UnixDate.jy, which can be used to calculate it.

The Problem

Many customers load DLA books generated by a variety of sources including the z/OS DLA, ITNM, and custom sources like contacts from Active Directory.

Most often, customers simply use the built-in loadidml.sh or loadidml.bat pointed either to a file or to a directory where the books reside. If pointed to a directory, the loadidml.sh script simply loads all the books serially. Usually the bottleneck is not the database server, meaning that it should be possible to run several bulkloads concurrently. There are two obvious options for improving bulkload throughput:

If there are multiple storage servers in the TADDM environment (streaming mode), bulkloads can be performed on all (or a subset) of the storage servers simultaneously.

If there is sufficient memory and CPU available on a storage server, several bulkloads can be performed on a single server simultaneously.

Solution Outline

I have written some wrapper scripts to perform the parallel bulk loads. The scripts work by checking for new books in a specific directory known as the book repository (/home/taddmusr/DLA_BOOKS in the attached scripts).

You can either configure different DLA sources to deposit the books on different storage servers, or you can mount the book repository on all your storage servers. Mounting the book repository on all storage servers will result in the largest reduction in time to load the books.

This solution also records metadata about each bulk load, allowing you to produce statistics on the loads.

Installing the tooling

Assuming the TADDM service account is called "taddmusr", extract the processBooksTooling.tgz file to the $HOME/scripts directory.

Edit the processBooksStartup.sh script to set the following variables:

PARALLELISM=5 - this defines the number of concurrent bulkloads to run on this storage server. Start with this parameter set to 2 and verify that everything works as expected. Then continue to increase it on all storage servers until a point of diminishing returns is reached.

USER=taddmusr - this parameter is only used if the tooling is run as root. The script will su to the named user before running the thread.

STARTSCRIPTDIR=/home/taddmusr/scripts - the directory where the two scripts were extracted

WASSEEDDIR=$COLLATION_HOME/var/dla/zos/was - the IZDJSEED utility run on the z/OS LPAR creates files that are not DLA books but rather TADDM "seed files", which TADDM uses to determine how to communicate via JMX with the WebSphere instances on the mainframe LPARs. If you don't have WebSphere servers running on your mainframe LPARs, there is no need to set this variable.

LOADOPTIONS="-u administrator -p collation -o" - these are the options to pass to loadidml.sh. It is good practice (although currently unnecessary) to pass the username and password. The -o tells loadidml to process a file even if it has been previously processed (i.e. it has an entry in $COLLATION_HOME/bulk/processedfiles.list). Another common option is "-g", which tells the bulk loader to persist "groups" of objects all at once. It results in substantially faster loads, but it has the drawback that a single malformed object in the book will cause a whole group of objects to fail to persist. It is wise to make sure the loads work for a period of time before enabling this option.

STATUSSCRIPT="/etc/init.d/collation" - the status of the TADDM server is checked prior to looking for new DLA books. If the server is not in the "Running" state, the threads immediately exit.

Optionally, add a cron entry similar to the following:

33 * * * * /home/taddmusr/scripts/processBooksStartup.sh start

If you don't add a cron entry, then you will have to remember to start the processBooksStartup.sh script each time the TADDM server is restarted.

How it works

When you run the processBooksStartup.sh script, it will launch "n" copies of the processBooks.sh script (see the PARALLELISM variable above). Each of these threads will sleep for 60 seconds before looking for files in the book repository with names in the format <something>.xml. If a file is not found, the thread will resume the sleep/check cycle. If a file is found, it will be renamed to <something>.xml.<threadNumber>.loading. Since renaming a file is an atomic operation, even if two threads pick up the same file at the same time, only one of them will succeed in renaming the file. The other one will fail and simply go back to looking for any new books, as sketched below.
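A rough sketch of the claim step in shell (variable names are illustrative, and the loadidml.sh location and its -f file argument are assumptions on my part - this is not the exact tooling code):

for BOOK in "$BOOKREPO"/*.xml
do
    # mv is atomic: if two threads race for the same book, only one rename succeeds
    if mv "$BOOK" "$BOOK.$THREADNUM.loading" 2>/dev/null
    then
        # this thread now owns the book; hand it to the bulk loader
        "$COLLATION_HOME/bin/loadidml.sh" $LOADOPTIONS -f "$BOOK.$THREADNUM.loading" > "/tmp/load.$$" 2>&1
    fi
done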

If the DLA book is successfully renamed, the thread then runs the loadidml.sh script with the options specified above. The output from the loadidml.sh script is mostly useless, so it is redirected to the file /tmp/load.<pid>, where pid is the process ID of the thread.

Load Metadata

The following information is captured for each DLA book load:

DLA Book name

DLA Book size

Start time of the book load

End time of the book load

Return code of the book load (0 is success)

The output above is written to a file in $COLLATION_HOME/bulk/loadidml.<threadNumber>.out

Stopping the server

If you wish to stop the processing of DLA books (perhaps before stopping the TADDM server), you can execute the following command:

$ rm $BOOKREPO/keepRunning

Each thread will check for the existence of that marker file before looking for new books to process. If it doesn't exist, the thread will exit.

Known Issues

Since the processBooks.sh threads run all the time, it is possible that a thread could pick up a DLA book which is incomplete (partially transmitted). If that happens, the bulkload of that file will fail. Check the logs to determine if a bulkload failed, and manually re-load the book if that is the case.

The Problem

A customer wants to audit all their Windows computer systems so that they know what local accounts exist on each. TADDM doesn't collect this information by default; however, we can use a Computer System Template Extension to collect the list of local users and store it as a Config File.

The Solution

Log in to the TADDM Discovery Console, go to the Computer Systems drawer, and ensure that the Windows Computer System Template is enabled.

Now, create the $COLLATION_HOME/etc/templates/commands/WindowsComputerSystemTemplate file with a line to execute a script on every Windows Computer System:

#########################
# LogDebug
# Print routine for normalized messages in log
#########################
def LogDebug(msg):
    '''
    Print Debug Message using debug logger (from sensorhelper)
    '''
    # assuming SCRIPT_NAME and template name are defined globally...
    # point of this is to create a consistent logging format to grep
    # the trace out of
    log.debug(msg)

The sensorhelper library has a defect which was fixed for AppServer, but not for ComputerSystem. In order to modify logical content associated with a ComputerSystem, we have to access the target hashmap directly:

Now, when discovery is run against a target, we can go to the Config File tab of the details panel for that target and see the list of local users:

And if we click on that link:

One final point about this extension. The customer then tried to compare various computer systems against a known-good computer system, but for some reason TADDM would NOT show the differences for this localAccounts configuration file, although it would report the differences for a statically collected file (like system32/drivers/etc/hosts). I used api.bat to dump out the XML for the computer system, and the only real difference I found between my programmatically created Config File and the one that TADDM collected from the file system was that the TADDM-collected one had the checksum set. I modified my code to calculate the checksum and, lo and behold, the comparison started showing the differences.

Here is the function to set the checksum. Simply add it to the file and uncomment the call to calc_checksum() in the code above.
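A minimal sketch of what such a function can look like (the CRC32 algorithm and the setChecksum() setter are assumptions for illustration; adapt them to your environment):

import java
from java.util.zip import CRC32

def calc_checksum(contents, config_file):
    # compute a checksum over the file contents and set it on the config
    # file object so that TADDM's comparison will show differences
    # (CRC32 and setChecksum() are assumptions, not necessarily what TADDM uses)
    crc = CRC32()
    crc.update(java.lang.String(contents).getBytes())
    config_file.setChecksum(crc.getValue())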

Whenever a configuration change is detected by TADDM, the following information is stored in the CHANGE_HISTORY_TABLE:

guid of CI that changed

attribute that changed

old value

new value

date that the change was detected (stored as a java date)

It's useful to be able to query TADDM and ask it, "what changed on a particular CI (or group of CIs) during a certain period of time?" The change data becomes less useful as it ages, however, and at some point the data should be removed for size and performance reasons.

Normally, one should never add, edit or delete data in the TADDM database directly using SQL, as this can result in inconsistencies with the model objects. Records in the CHANGE_HISTORY_TABLE, however, are not "model objects" and, as such, can be safely deleted.

Each customer must decide what retention period is appropriate for change data. Usually the retention is somewhere between 1 week and 1 year.

Method 1 - Execute the following SQL statement to delete the old records:

delete from change_history_table where when_changed < x

Note that "x" is the date expressed as the number of milliseconds since Jan 1, 1970 at 00:00:00

We can use jython or java to convert a date in a user-friendly format to number of milliseconds:

#!/usr/bin/env ./jython_wrap
# this is the UnixDate2long.jy program which should be placed in $COLLATION_HOME/support/bin
import sys
import java

# sketch of the conversion (the original listing is truncated here; the input format "yyyy-MM-dd HH:mm:ss" is assumed)
print java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(sys.argv[1]).getTime()

TADDM ships with a program that can be used to execute arbitrary SQL against the database. Simply place the SQL to execute in a file like /tmp/delCH.sql, then run the dbupdate.sh shell script:

/tmp/delCH.sql
--------------
delete from change_history_table where when_changed < 1309837986000
--------------

$ cd $COLLATION_HOME/bin
$ ./dbupdate.sh /tmp/delCH.sql

Why might this fail?

Whenever any updates happen to a database, all updates are temporarily saved in case something goes wrong during the update. If something does go wrong, then the database will "back out" the changes, putting the database back to its exact state prior to the start of the updates.

If there are a large number of rows to delete, this command can fail because all the records to be deleted are temporarily stored in the rollback logs until ALL the records are deleted. This means that the TADDM database would potentially need a HUGE rollback log. If the database runs out of rollback space, then the transaction will fail.

How can we do the deletes without risking this failure and without requiring a huge rollback log?

Method 2 - Use a database cursor to do the deletes, calling commit periodically.

Download and extract the attached file which is appropriate for your database type:

You have probably seen several documents about dormant data cleanup in TADDM, and hopefully you are already taking steps to clean up these old objects. Cleaning up old objects that are no longer discovered ensures that the TADDM data is current, and can also improve performance by reducing the size of the database. But do you wonder if you are really cleaning up everything that is old? Most of the dormant cleanup articles refer to Computer Systems and change history -- is there more to it than that? Recently we went through a detailed analysis of dormant data in a customer environment. This analysis reviewed all classes which had a count of more than 10,000 objects with a last modified time of greater than 45 days. The customer was running TADDM 7.2.1.4 with a DB2 database and approximately 6 million CIs. This blog entry is meant to document the findings and recommendations that resulted from that analysis. While the analysis is 90% done, the work still progresses, so there may be more updates to this as we finish up or learn more from other customers. That being said, this is what we found:

USE THE API TO CLEANUP DORMANT CIs

In general, the TADDM API delete command should be used to clean up dormant CI data. This is because in most cases a CI is represented in many database tables, and SQL cleanup is risky: you would need to know all the tables that CI is in, otherwise you risk corrupting the database and potentially causing errors during discovery.

The format of the api delete command is:

dist/sdk/bin/{api.sh/api.bat} -u user -p password delete {guid}

You can run more than one api delete at a time, although you should monitor CPU and memory resources so that you do not exceed the capacity of the server, especially if other activities such as discovery are occurring at the same time. At the customer site we analyzed, four concurrent api deletes utilized all available CPU when run on the Primary Storage Server. We recommend, where possible, running dormant cleanup on a secondary storage server, or splitting it across multiple secondary storage servers. We also recommend running this process during idle time so that there is no chance of deleting a CI that is in the process of being updated. A driver loop might look like the sketch below.
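A minimal sketch, run from dist/sdk/bin (the guid file and credentials are illustrative):

for GUID in `cat /tmp/dormant_guids.txt`
do
    ./api.sh -u administrator -p collation delete ${GUID}
done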

There are a few exceptions to this "use the api" rule which I will discuss later in this post.

CLASSES TO CLEANUP

The following tables list the classes we analyzed and the database query we used to find the dormant guids to pass to the api delete. In all queries listed below, ${DATE} represents the time stamp, in epoch format, of the date you want to delete CIs older than. This ${DATE} will be compared to the CI's last modified time (LMT) to determine whether it meets the dormant criteria. I will discuss methods to obtain this date later in this post.

TABLE 1: These classes can be cleaned up directly, based solely on the last modified time (LMT) in the database. In the customer analysis, the LMT we used was 45 days prior to the current date.

TABLE 2: Typically L2Interfaces are removed when their parent is removed, but there may be cases where this does not happen, such as when you have not installed a fix pack that contains APAR IV38375. The following query detects L2Interfaces which have no parent computer system. These are orphaned and can be deleted without reference to a last modified time. It is important to run queries without LMT, such as the one below, during idle time, as otherwise you may inadvertently capture a guid that is transient at the time (being stored or updated).

TABLE 3: Some classes represent naming attributes which can be used for multiple CIs. For example, an IpAddress is used as part of the BindAddress naming attribute, which is one of the naming rules for AppServers. So a single IpAddress could act as the naming rule for many AppServer objects on a host. If you simply delete the IpAddress based on LMT, you will be removing the AppServer objects as well, and they may not be dormant. This can occur in cases where the LMT on the IpAddress is not properly updated, due to conditions such as a failed OS sensor while the application sensor worked and gave the AppServer CI a current LMT. For the classes below, you should not only check the LMT of the guid, but also check whether it is referenced as a superior guid in the superiors database table. If it is in the superiors table, then it is serving as a naming rule attribute for one or more CIs and should not be deleted.

NOTE - the queries in the table below are DB2 specific. For Oracle, the HEXTORAW function must be used. For example, when Oracle is the back end database, use this for the LogicalContent query:

select GUID_C from BB_LOGICALCONTENT42_V where lastModifiedTime_c < ${DATE} and HEXTORAW(GUID_C) not in(select sup_supr_guid from superiors)

Class Name / Database query to find dormant guids:

LogicalContent:

select GUID_C from BB_LOGICALCONTENT42_V where lastModifiedTime_c < ${DATE} and GUID_C not in(select hex(sup_supr_guid) from superiors)

Fqdn:

select GUID_C from BB_FQDN31_V where lastModifiedTime_c < ${DATE} and GUID_C not in(select hex(sup_supr_guid) from superiors)

IpAddress:

select GUID_C from BB_IPADDRESS73_V where lastModifiedTime_c < ${DATE} and GUID_C not in(select hex(sup_supr_guid) from superiors)

BindAddress (DB2):

... on b.GUID_C = pa.GUID_X where b.lastModifiedTime_c < ${DATE} and pa.NRS_GUID_X not in (select sup_supr_guid from superiors)

BindAddress (Oracle):

select guid_c from BB_BINDADDRESS67_V left outer join superiors on HEXTORAW(GUID_C) = sup_supr_guid where sup_supr_guid is null and lastModifiedTime_c < ${DATE}

TABLES TO USE DATABASE SQL TO CLEANUP

You should also be cleaning up the change_history_table and change_cause_table on a regular basis; otherwise these tables can get very large and degrade performance over time. You do not need to use the API to clean up this data. This is documented here:

One caveat -- if this is the first time you are doing cleanup, or if your cleanup interval is longer than about a week, you should use proper SQL to delete in batches. If you try to delete too many rows at once, it can cause your DB2 server to run out of transaction log space or Oracle to run out of UNDO space. Use "FETCH FIRST" in DB2 or "WHERE ROWNUM <" in Oracle to only delete the number of rows that your database can handle in one transaction. For example, to delete in batches of 50,000 for DB2:

"DELETE FROM CHANGE_HISTORY_TABLE WHERE ID IN (SELECT ID FROM CHANGE_HISTORY_TABLE WHERE PERSIST_TIME < ${DATE} FETCH FIRST 50000 ROWS ONLY)"

"DELETE FROM CHANGE_CAUSE_TABLE WHERE CAUSE_ID IN (SELECT CAUSE_ID FROM CHANGE_CAUSE_TABLE WHERE CAUSE_ID NOT IN (SELECT ID FROM CHANGE_HISTORY_TABLE) FETCH FIRST 50000 ROWS ONLY)"

And for Oracle:

"DELETE FROM FROM CHANGE_HISTORY_TABLE WHERE ID IN (SELECT ID FROM CHANGE_HISTORY_TABLE WHERE PERSIST_TIME < ${DATE} and ROWNUM < 50000)"

"DELETE FROM CHANGE_CAUSE_TABLE WHERE CAUSE_ID IN (SELECT CAUSE_ID FROM CHANGE_CAUSE_TABLE WHERE CAUSE_ID NOT IN (SELECT ID FROM CHANGE_HISTORY_TABLE) and ROWNUM < 50000)"

Additionally, you may find the first time you start cleaning up the BindAddress class that there are so many guids to delete that the api is not a reasonable choice. This class generates a lot of dormant records: at the customer we worked with, 60-80K were created per week, and the first time we did cleanup there were millions of guids. With this in mind, the FIRST time you clean up this class, if using TADDM 721, you can use SQL to do the deletes. Always perform a DB backup prior to deleting anything directly from the database. The tables to clean up are bindaddr, persobj, mssobjlink_rel and aliases. Below is an example snippet of the code used to accomplish this:

-- First find the guids to delete (this query may take a long time if the bindaddr table is large):

-- Then delete them in each DB table. Note that the format of the guid input for the ALIASES table is different from the other queries. The data in the ALIASES table is defined with the "FOR BIT DATA" clause, which means it is stored as binary data. For tables such as this, when you query a guid column you need to convert the data. In DB2 you do this by prefacing the guid with an 'x', e.g. x'123456..'. In Oracle you do this with the HEXTORAW function, e.g. HEXTORAW('123456...'). Here is an example of the code to delete the guids that are listed in the file, with DB2 syntax:

...

for GUID in `cat ${FILE}`

do

db2 "DELETE FROM BINDADDR WHERE GUID_X = '${GUID}'" >/dev/null

db2 "DELETE FROM PERSOBJ WHERE GUID_X = '${GUID}'" >/dev/null

db2 "DELETE FROM MSSOBJLINK_REL WHERE OBJ_X = '${GUID}'" >/dev/null

db2 "DELETE FROM ALIASES WHERE ALIASES.AL_MASTER_GUID IN (x'${GUID}')" >/dev/null

db2 "COMMIT" >/dev/null

done

..

Please use api delete after this initial cleanup, as the tables that bindaddr CIs exist in today may change in the future. As noted above, deleting directly from the database is risky; only perform this task if you are comfortable with SQL and with monitoring that the queries completed successfully. Always have a DB backup available in case problems are encountered. Consult your DBA for further assistance if needed.

WHAT IS LAST MODIFIED TIME 'LMT' IN TADDM, AND HOW CAN I PROGRAMMATICALLY USE IT WHEN I DO THE CLEANUP?

LastModifiedTime, otherwise known as 'LMT' in TADDM, is the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC, taken on the StorageServer which is storing the data.

On Unix systems, the command

date '+%s'

returns seconds since 1970-01-01 00:00:00 UTC, so this value is 1000 times smaller than LMT, which is in milliseconds. You can use this command to get the current time stamp, multiply it by 1000, and then subtract (dormant days * 86400000) to get your DATE value; 86400000 is one day in milliseconds. Then assign the calculated value to the query, e.g.
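In shell, for example, a 45-day cutoff can be computed like this (the 45 is illustrative):

DATE=$(( $(date '+%s') * 1000 - 45 * 86400000 ))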

When the DB2 server has the same timezone as the StorageServer, you can use a query like the one below against specific classes. In this example, we are selecting the computer system guids and their LMT for systems whose LMT is older than 100 days (86400000 * 100):
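A sketch of such a query (the exact SQL is not reproduced here; COMPSYS is the computer system table, and LASTMODIFIEDTIME_X is its LMT column, following the column pattern noted below):

db2 "select GUID_X, LASTMODIFIEDTIME_X from COMPSYS where LASTMODIFIEDTIME_X < ${DATE}"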

IS THERE OTHER DORMANT DATA?

Yes, there will probably be some, perhaps in low numbers from before fixes were implemented, or in tables that the customer we analyzed did not have data in. To check your database for other dormant data, note that most CI specific tables contain a lastmodifiedtime_x column that you can query. You can get a list of all tables in the database with SQL such as:
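For example, on DB2 you can list the tables that have this column straight from the catalog (a sketch; on Oracle, user_tab_columns is the equivalent):

db2 "select distinct tabname from syscat.columns where colname = 'LASTMODIFIEDTIME_X'"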

If a table does not have this column, you can ignore the table; only class specific tables will have the column.

If you find large amounts of data not cleared after you implemented the cleanup process for all tables above, open a PMR and Support will help determine next steps.

PROBLEMS WE ENCOUNTERED DURING THE ANALYSIS

While we were analyzing the customer data, we did find some issues. As noted above, we were running TADDM 7.2.1.4; the following APARs were opened during the analysis to correct problems found. These APARs will be shipped in upcoming fix packs (some are in 7.2.1.5, which is already GA):

IV38375 L2INTERFACES ARE NOT REMOVED FROM COMPUTER SYSTEMS IN TADDM WHEN THEY ARE REMOVED FROM PHYSICAL HARDWARE

IV38668 COMPUTERSYSTEMCONSOLIDATIONAGENT MERGES COMPUTER SYSTEM WHEN USING IP ADDRESSES FROM PRIVATE NETWORK

IV43076 ADD OPTION TO NOT HAVE COMPUTER SYSTEM LMT UPDATED BY IPDEVICE

IV43886 CHILD CIS ARE NOT MERGED WHEN THEIR ALIASES ARE RE-ASSIGNED

IV44193 CONNECTIONDEPENDANCYAGENT CAN UPDATE DORMANT COMPUTER SYSTEM

IV44543 FCVOLUMES NOT REMOVED WHEN NO LONGER DISCOVERED

IV45710 LAST MODIFIED TIME NOT UPDATED BY TOPOLOGY AGENTS

IV46309 OBJRDISC TABLE HAS DORMANT RECORDS WHEN REDISCOVERY IS TURNED ON AND GROWS VERY LARGE

IV46492 IPDEVICE SENSOR DOES NOT UPDATE IPINTERFACE LAST MODIFIED TIME (LMT)

IV46740 VLANINTERFACES THAT ARE NO LONGER ASSOCIATED WITH A VLAN ARE NOT REMOVED FROM TADDM

IV39214 LASTMODIFIEDTIME IS NOT ALWAYS UPDATED WHEN STORING PHASE ENDS

You can look up the status of these APARs, and any others, on the TADDM Support Portal here:

While the TADDM user interface is useful when you want to browse your discovered data, it has its limitations. When you're in support like I am, you use the API to get output in a format you can easily share, transfer, and grep for particular data. When you're a TADDM user, the same types of queries can be useful for getting configuration descriptions of particular discovered objects, troubleshooting, and integration. I keep a file of reference API queries that I refer to on a daily basis, and I thought I'd pull some of those reference queries out and share them with you. This first post will cover the most common scenario -- querying discovered data -- but the API can do more than just that, and I'll share some of those other things in later posts.

A couple of notes:

-- run api queries from dist/sdk/bin

-- for Windows servers the command is api.bat, not api.sh. Just replace it in the examples below.

-- the "-d" or "--depth" flag controls how far "down" the tree of objects is returned. For large applications, use a depth of 1 or filter the output. A depth of 3 while querying a large Oracle RAC or a depth of 5 while querying a large WebSphere server can return enough information to cause an OutOfMemory.

1) Query by IP or name

This is the simplest of API queries, but it is what I find myself doing the most -- querying for a specific system using the attributes most of us use to think about a system -- the name or IP address.

Query by IP address:

./api.sh -u <user> -p <password> find --depth 1 "select * from ComputerSystem where contextIp == '10.69.108.226'"

Note: this won't always work if the system is multi-homed. The contextIp is the IP the computer system was discovered with, which may not be the IP you are using to query. (For instance, the context IP for a virtual machine could be the IP of the Virtual Center server.)

2) Query by guid:

A GUID is a globally unique identifier. When I'm troubleshooting a problem and am thinking of discovered data in terms of guids, I use this simple query:
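A sketch of that query (the guid value is illustrative, and I am assuming MQL guid equality here, which works in my environment):

./api.sh -u <user> -p <password> find --depth 1 "select * from ComputerSystem where guid == '95B4A0FC16E83A2B9F61BFF8A6DD25DC'"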

You can find the guid in the user interface details pane for the object.

3) Query by object type:

./api.sh -u userid -p password find --depth 1 <object type>

Where <object type> is a CDM Object. There are tons of these CDM objects, but here are a couple to get you started. (If the object you are looking for isn't in this short list, review the Common Data Model for additional objec types -- they are just some common ones.)
Domino: DominoServer
Oracle: OracleServer
WebSphere: WebSphereCell, WebSphereServer, WebSphereNode
Apache: ApacheServer
Windows: WindowsComputerSystem
AIX: AixUnitaryComputerSystem
Linux: LinuxUnitaryComputerSystem
Solaris: SolarisUnitaryComputerSystem

Finding a Db2 database with a specific name:
api.sh find 'select * from Db2Database where name == "TEMPDB"'

Finding all the network devices by finding computer systems of type "bridge" or "router":
"select * from ComputerSystem where type == 'Bridge' or type == 'Router'"

These are all pretty simple -- but once you become fluent in these simple queries, you gain confidence in how to use the API and can start building more complicated MQL queries and using the API for a wider variety of tasks.

What other types of API queries would you like to understand better? Let me know in the comments!

Next in this series: My favorite queries for finding TADDM configuration information using the API.

Gathering the troubleshooting data needed for TADDM can be a drag -- were the logs in DEBUG when the problem happened? Which logs need to be in DEBUG? Storage servers or discovery servers? Does support need results files? What are results files anyway? It's surprisingly complicated! The TADDM support team recently redid its MUSTGATHER documentation, and it's a lot more useful (if we do say so ourselves...). The main doc is stripped down to the essentials, and then we link out to specific scenarios -- installation problem? There's a note for that. Custom server template question? There's a note for that. Trouble getting authentication to LDAP via VMM to work? There's a note for that.

We hope it makes your initial data gathering task a lot easier, and speeds up the resolution of your problem. (Believe it or not, we don't like to have to ask for logs all the time, either.) Let us know what you think, and if there are any other topics we should add to the MUSTGATHER.

Do you ever get tired of maintaining long access lists with different user names for every different application server? Do your WebSphere (WebLogic, Domino, etc.) admins not want to give TADDM access to the administration accounts? Are you interested in discovery via ITM? If you answered "yes" to any of these questions, you should investigate script based discovery in TADDM, and this Support Technical Exchange (STE) will be a great place to start.

When we write STEs, we use them as an opportunity to "deep dive" into a particular set of functionality. Sure, we read the manuals, but we also run scenarios in the lab, crawl through log files, look at old PMRs, and talk to our development team to give you a full picture of how the function works, how to set it up, how to troubleshoot it, and what the "gotchas" might be. You'll walk out with a much deeper understanding of script based discovery.

This STE will cover the differences between normal discovery, script based discovery, and asynchronous discovery; understanding script based discovery; configuration for script based discovery; access; TADDM configuration; logging; and asynchronous discovery.

Use the following link to find the schedule and connection information:

The problem with this rule is that it adds in the PCRS, UCRS, and DCRS databases, which are used for Production, UAT, and Development respectively.

Now, you might think to yourself, gee, this is pretty easy: just re-create a business application specific to Production and put in the full database name like 'PCRS', then do the same thing for Development, UAT and Test. Did I mention that they have 150 apps? Assuming we need to create 3 apps for each of the base apps, that is 450 apps! Cutting and pasting the rules is onerous. The TADDM UI doesn't currently have any way to copy a business application template, so I opened an Enhancement Request (http://www.ibm.com/developerworks/rfe/).

Since I needed the capability this week, I built a jython script that makes a copy of an AppTemplate along with its corresponding MQLRules. This script can be downloaded here: jython script to copy an AppTemplate

Copy the script to $COLLATION_HOME/support/bin and invoke it as follows:

$ ./cpba.jy administrator collation "TADDM" "TADDM - Production"

It will login to TADDM as "administrator" with the password "collation" and make a copy of the TADDM AppTemplate to the new name "TADDM - Production".

Initially, I created the app and just set the new name to whatever was passed in on the command line (something like "TADDM - production").

What I found was that the resulting business app would not show up in the grouping composer, nor in the list of business apps. In addition, I set the MQL rules to the same values as were in the source business app. This caused problems because the name of the MQL rule was the same in both apps, meaning that the GUID for the MQL rules was also the same; hence changes to a rule in one business app would cause the rule in the other business app to change.

What I found was that I had to do the following:

1) Create a business app, set its name to the name passed in on the command line (like "TADDM - production"), and get the GUID returned

2) Set the name of the AppTemplate to BA_GUID:BA_NAME (something like "AA0A20EE5BBD336481279CA664FB380A:TADDM - Production")

3) Prefix the names of the rules with the BA_GUID as well - but with no colon (something like "AA0A20EE5BBD336481279CA664FB380Adatabase")

The javadoc or class documentation showing these attributes should be updated to indicate the required format, and I have opened a Documentation PMR for that.

You may be wondering what all these sensor options are. TADDM now has script-based sensors, Asynchronous Discovery (ASD) sensors, and the original 'legacy' sensors TADDM has always provided. How do you choose which to use? This has been an important topic with one of our TADDM customers lately, and I thought it would be worth sharing the information below, which I received from development.

First let me describe each one:

Legacy sensors: these are the Java-based sensors that have always shipped with TADDM and are the default option for discovery; they run on the TADDM discovery server.

Script-based sensors: the discovery logic (the script) is separated from the parsing/transformation logic (Java/Jython), but the scripts are launched from a sensor on the TADDM discovery server.

ASD: the same scripts as the script-based sensors, but they are run manually on the target and the results are transferred to the TADDM discovery server for processing.

In general, script-based discovery has advantages over the legacy Java sensors; there are exceptions, and I will note the ones I know about below. The advantages are:

1) script sensors are fairly simple and are written in platform shell scripting languages (sh (*NIX) and PowerShell (Windows)), and hence:

- the user can review all the commands that are going to be executed on the discovery target

- the user can even change script commands as long as the output is unchanged (e.g. add sudo) - although you will be required to merge manual changes when TADDM is updated

You can view the scripts in the appropriate sensor sub-directories under $COLLATION_HOME/osgi/plugins/ on your TADDM discovery server.

2) sensor scripts can be extracted into a separate, independent package and run outside TADDM discovery (Asynchronous Discovery) on targets where direct access from TADDM is not possible due to connectivity or access issues. That also opens up other interesting possibilities:

2.1) sensor scripts (the ASD package) can be sent to the host owners when they are not able or willing to provide credentials

2.2) sensor scripts (the ASD package) can be sent via TADDM-independent host access methods (e.g. using IEM to perform ASD discovery without the need for any credentials or scopes in TADDM)

3) script based sensors are transferable via 'discovery over ITM infrastructure' - for non-scripted sensors this is possible only for sensors using SSH access (typically that is limited to Level 2 sensors - but not always).

4) script based sensors can reduce credential requirements - in many cases only OS-level credentials are needed (see the WebSphere example below), although there are exceptions (see the Oracle example below).

The WebSphere sensor is a good example of a sensor that, with the move from JMX to script based discovery, benefits from all of 1-4 above. It can now work through ITM, and requires only OS level credentials, just enough to be able to capture some files. A number of customers use the WebSphere script based sensor because getting WebSphere access is much more difficult than just OS file access.

Oracle sensors, on the other hand, benefit from 1-3, but as for the 4th, the trade-off made in the sensor today is that the legacy sensor requires an internal Oracle user (via JDBC), while the script sensor requires not just an OS-level account but also the Oracle 'instance owner' user (the owner of ORACLE_HOME). This is a fairly rigid requirement that some users may not be able to meet. So in this case, the Oracle sensor in script mode actually has tighter credential requirements. You can read more about this here: https://www-304.ibm.com/support/entdocview.wss?uid=swg1IV57766 So depending on your environment, the Oracle script sensor may not be the best option.

Although scripted versions of sensors tend to have advantages over the 'legacy' approach, there are still certain technology limitations: network devices, appliances, and some applications do not allow the script approach to be used. So although the scripted approach is preferred whenever possible, some sensors will continue to use the 'legacy' approach due to these technology limitations. To determine which sensors have a scripted option, please refer to the TADDM sensors guide here:

If a sensor supports scripted discovery, it will have a sub-section "Asynchronous and script-based discovery support" which lists the support level, configuration and access requirements, and known limitations. This information can help you decide whether the scripted version of the sensor is a good choice for your environment.

So, as I said above, the 'legacy' sensors are the default, but where possible scripted sensors are preferred, as long as you understand the limitations. If you want to get started switching to the scripted sensors, here is a link to the configuration options: