Hello everyone, I had some test automation I was working on ... and I needed to gather a good bit of WebSphere config information and publish it in a form that my automation could use. There were numerous reasons why wsadmin (a good solution in general) was not the ideal solution for me. I wrote a single Java class that processes my small-to-medium WAS install in about 2 seconds and creates a properties file (ideal for Ant) as well as a JSON file (ideal for anything else) with all of the info from the config that I needed. This is not what I would call a "best practice" in that it may need to change as the config changes, but it parses relatively stable XML files, so additions to the schema (which are the norm) should not impact it unless I want what was added. The JSON output is hierarchical and thus does a nice job reflecting the config. For the properties file (a flat structure) I nested the keys (ie: <cell>.<node>.<server>.httpPort=9080), which is still relatively easy to use in Ant.
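As a tiny illustration of that key-nesting scheme (this is just a sketch of the idea, not code from the tool itself; all of the names below are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenConfig {

    // Flatten a nested map (cell -> node -> server -> attribute) into dotted
    // property keys, e.g. "myCell.myNode.server1.httpPort=9080".
    @SuppressWarnings("unchecked")
    static Map<String, String> flatten(String prefix, Map<String, Object> tree) {
        Map<String, String> flat = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : tree.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                // A nested level of the hierarchy: recurse with a longer prefix
                flat.putAll(flatten(key, (Map<String, Object>) e.getValue()));
            } else {
                // A leaf value: emit one flat property
                flat.put(key, String.valueOf(e.getValue()));
            }
        }
        return flat;
    }

    public static void main(String[] args) {
        Map<String, Object> server = new LinkedHashMap<>();
        server.put("httpPort", 9080);
        Map<String, Object> node = new LinkedHashMap<>();
        node.put("server1", server);
        Map<String, Object> cell = new LinkedHashMap<>();
        cell.put("myNode", node);
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("myCell", cell);
        System.out.println(flatten("", root)); // {myCell.myNode.server1.httpPort=9080}
    }
}
```

The JSON output keeps the same hierarchy as the nested map, so both files carry the same information; the flat form is just friendlier to Ant property lookups.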

To run this speedy config collector, simply download https://www.ibm.com/developerworks/mydeveloperworks/blogs/ServiceabilityDev/resource/GetWebSphereTopology.zip, unzip it (to get the jar), and run it following the comments right inside the source (which is in the jar). You will need the JSON4J jar to get the IBM JSON library, but I did not include it, not knowing about its copyright et al (it ships with WebSphere and several related products). If you do try it and find that it is not collecting some config you need, or that it is not behaving as expected, please ping here and I'll do my darndest to fix it as soon as possible. Thanks,

I've gotten a request to take one topic (access to remote logs with HPEL) out of order and provide some guidance on it sooner rather than later. Please look at the previous entry for more fundamental background, but I need to provide a little more background here about remote access. HPEL (the High Performance Extensible Logging component available in WebSphere V8) offers numerous ways to access logs remotely:

1. SSH or telnet to the machine and use the commandLine LogViewer or the API
2. Zip up the log repository (the logs directory, usually <profileHome>/logs/<serverName>) and send it to the machine where you want to do the analysis
3. Use the Admin Console LogViewer
4. Use the API to access remote content

The first 3 are pretty self-explanatory (if not, let me know), so we're going to focus on #4. HPEL employs JMX to communicate with remote machines. If you've used JMX before, please don't quit yet. We have layered some helper classes around it which make it almost painless (OK, yes, you need the thinClient jar, and if there is security in the cell or server you do need to deal with that in your env vars et al ... but bear with us here). What you'll notice is that accessing logs from a remote server is the same as accessing logs from a local server. In our example here, we're only going for information from the current serverInstance, but you could get data from any or all server instances with one query. Since you are potentially sending data through many hops, you will want to use some of the filtering we'll see soon on this blog. The hops come in when you connect your machine to the DMgr, which connects through a NodeAgent to a server ... your data has to go from the server to the NodeAgent, to the DMgr, and back to you. That's a fair amount of serialize/deserialize and transport. Thus, while you can bring back as much data as you'd like, doing some smart filtering is a good idea.

So we have annotated a sample that gets all Info, Warning, and Severe messages from a remote server. It uses a helper class which protects you from doing any JMX yourself, and that helper (and more) was referenced in the last blog entry (and if you can somehow find my file uploads, it is there too). So let's look at our sample code first.

You need to pass in all the parms needed to contact a JMX server and give it addressability to the server you're interested in. So the cell, node, and server tell us which server you want to talk to, and the host, port, and connection type connect you to a JMX server in the cell.

So what's with that serverContext? Well, what if the server whose logs you want to view is down? You cannot get to an MBean if the server is down. The serverContext allows you to hook into the NodeAgent instead of the server, and tells the MBean in the NodeAgent not to get its own logs, but instead to get the logs from the server you specify in the server context.

You pass all the startup parms to the RemoteReaderHelper so that it can do all the JMX plumbing we all hate to do.

This single line does all the real HPEL work for you, so let's look at it.

Note that the helper class exposes the same API as the local sample, since it is a subclass of the API itself; what works with the local API works with the remote.

The first parameter here needs to be a Date that identifies which serverInstance you are interested in. By leaving the Date null, you are saying you want the latest serverInstance.

The next parms belong to one signature of this method that allows you to specify a message severity range. We'll later look at using a LogQueryBean, which enables serious filtering.

That query (since it covered only one serverInstance) allows you to iterate through the log records that met your criteria with this simple for statement.

Note that we kept it brief by just doing getFormattedMessage(). Every piece of information from the logRecord is retrievable as we'll see later.
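Putting those numbered notes together, the heart of the remote sample looks roughly like this. To be clear, this is a sketch: the RemoteReaderHelper constructor shape and parameter order below are inferred from the description above, and the real helper (with its exact signatures) is in the sample library.

```java
// Pass the JMX connection info (host/port/connType), the target server's
// coordinates (cell/node/server), and the serverContext (so a NodeAgent can
// serve up logs for a server that is down) to the helper, which does all the
// JMX plumbing. Sketch only -- see the sample library for the exact signature.
RemoteReaderHelper remoteReader = new RemoteReaderHelper(
        host, port, connType, cell, node, server, serverContext);

// Same API as the local sample (the helper is a subclass of the API itself).
// The null Date means "use the latest serverInstance"; the two Levels are the
// severity-range form of the filter.
ServerInstanceLogRecordList pidRecords =
        remoteReader.getLogListForServerInstance(null, Level.INFO, Level.SEVERE);

// Iterate the records that met the criteria, exactly as in the local case
for (RepositoryLogRecord repositoryLogRecord : pidRecords) {
    System.out.println("  " + repositoryLogRecord.getFormattedMessage());
}
```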

If you look into the samples made available from my uploads or referenced on the WebSphere beta site (linked to from the prior article) ... you will see the helper class, and the final JMX classes that it calls. With those helpers (unchanged), you should be able to make code like this run. Then when you combine it with some of the upcoming filtering and merging samples, you will be a logging analysis phenomenon.

Note: If you look at public files from me, you will find the library of samples which includes the helper classes.

This article and attached samples are about using the simple High
Performance Extensible Logging (HPEL) API. For most typical analysis,
the LogViewer tool in the Admin Console and the LogViewer tool on the
commandLine already provide great capabilities and you may never need
more. If you do though (or if you are a tool developer at heart), to
paraphrase Malcolm and Angus (AC/DC) ... "For those about to code, we
salute you!!" I'm not going to dwell too long on the benefit of simple
APIs vs nonDeterministic parsing of many differently formatted records
et al ... but I am going to provide a sample library with a world of
gems, and do a deep dive into various types of samples (one in each
form/blog entry). The types of samples we'll cover are:

1. Simple filtered query
2. Simple filtered query with z/OS - z/OS has the controller/servant
concept, but we did all we could to make analyzing z/OS logs similar.
In samples after this, we'll just show how you would tweak it for z/OS
3. Merge of logs from multiple servers
4. Merge of all logs from a specific z/OS server together (ie: controller and all servants merged into one log)
5. Access to remote logs
6. A simple central logging solution
7. Polling/monitoring sample

Beyond that, these are forums (and blogs) ... so I will provide
additional feedback, samples, ... as desired. I admit it, I think HPEL
is awesome and ground-breaking, and I will do what I can to help people
get involved. So feel free to "use me" (er uh, use my enthusiasm) to
get me to customize samples et al. Many possible topics (running
RepositoryLogRecords thru JSR 47 formatters, advanced filtering, various
combinations of this functionality, WebUI type samples, ...) can be
discussed; I'll let the audience drive the direction.

So let's start out with the sample library. If you are viewing this on
my DW blog, I uploaded the samples, but I'm just getting used to this, so I can only hope that the uploaded file is something you can navigate to. If not, you should be able to get it via the beta web site entry here. Getting involved in the open
beta is also a great way to get the code. For anything NOT involving
remote, all you need is the small com.ibm.hpel.logging.jar. For remote,
we use JMX, so you need the admin thinClient (only about 200x the size
... sorry, I cannot control that one).

Background:
OK, let's start with some background. You need not be sitting on machine
A to view the logs from any server on machine A. There is remote
access (via JMX), and more easily, you can just zip up the logs
directory, bring it local, and view it there (with the commandLine tool
or with the API). And of course, the Admin Console LogViewer can also
view remote logs. We have optimized remote access, but ... as you might
guess ... if you need to do lots of analysis, it's not a bad idea to
just bring a repository local. The repositories are all created with Java,
so you can view a z/OS repository on Windows, a Linux repository on z/OS, or whatever
your heart desires. You can view an active repository while it's being
written ... or you can zip a snapshot up, send it elsewhere, and
analyze it there.

Definition of terms:
Log Repository - The base directory into which a JVM is logging. For WebSphere, this is usually of the form <ProfileHome>/logs/<serverName>/. This is generally all that needs to be known in order to access any logs locally.
Server Instance - One start and stop of a server (one lifecycle). If a server has been started 4 times and stopped 3 (ie: currently running), then there are 4 server instances. The "current" server instance is always the latest, regardless of whether or not the server is running.
Local Repository - Any repository that can be reached by referring directly to its directory locally (including network mapped drives et al).
Remote Repository - A repository that is accessed via network protocols (the default for this is JMX, which requires that the server is up). Technically, you can access your local repository via JMX, but other than for testing ... not sure why you'd want to.
Child process - A process which is a logical child of another process from a logging perspective. The prime example is a z/OS servant, which is a child of the controller. You may notice that child repositories sit underneath the parent repository; they are generally accessible only through their parent.
Merge - The aggregation of logs from several repositories (local or remote) based on time. The aggregation looks much like a ServerInstanceLogRecordList, but there are special techniques for accessing the header associated with a given record (which is helpful since there can be many different headers associated with a merge).
ServerInstanceLogRecordList - The class that represents the logs for a single ServerInstance (this may logically represent many physical files; as you'll note, we don't want people thinking in terms of physical files). It contains RepositoryLogRecords and headers.
RepositoryLogRecord - A class that represents a single log entry. It provides get methods for all info in the record.

Build information
As mentioned, samples that do not involve remote access require only
the small hpel jar (< .5Mb) to build and run. This covers all
filtering, merging, and even some formatter code. For remote samples,
you will need the admin thinClient jar that comes with WebSphere V8
(unless you've successfully gotten the JMX packages in the JDK to do the
job for you). So to build and run the samples, here are some sample
commands:

Building a sample that uses only local repositories:
javac -cp <WasHome>/plugins/com.ibm.hpel.logging.jar:. com/ibm/sample/hpel/ReaderSamplesForExercises.java

Running it is similar; for example (pointing the sample at a repository directory):
java -cp <WasHome>/plugins/com.ibm.hpel.logging.jar:. com.ibm.sample.hpel.ReaderSamplesForExercises <repositoryDirectory>

Simple Local Repository Sample
OK, now that you know what it all looks like, let's look at our first
sample ... a simple run through the records in the local repositories
that have severities between INFO and SEVERE (inclusive). Here is the
code (full source including imports and more comments in sample library
here):

public class LocalReaderSample {

  public static void main(String[] args) {

    // Create a repository reader (requires base directory of repository)
1   RepositoryReader logRepository = new RepositoryReaderImpl(args[0]);

    // Get iterator of server instances (start/stop of the server) extracting all log messages with
    // severity between INFO and SEVERE. Lots of different filtering options, this is just one sample
2   Iterable<ServerInstanceLogRecordList> repResults = null;
    try {
3     repResults = logRepository.getLogLists(Level.INFO, Level.SEVERE);

      // Go through the serverInstances one at a time
4     for (ServerInstanceLogRecordList pidRecords : repResults) {

        // For each server instance, go through the records
5       for (RepositoryLogRecord repositoryLogRecord : pidRecords) {

          // Just printing some key information here. Note that the repositoryRecord exposes all fields
          // with simple get methods
6         System.out.println("  " + repositoryLogRecord.getFormattedMessage());

        }

      }

    } catch (LogRepositoryRuntimeException lrre) {

      System.out.println("Exception while retrieving data: " + lrre);

    }
  }

}

I put numbers to the left of the noteworthy lines. The other lines are
pretty straightforward Java; the numbered lines focus on the API. As
you look at each line:
1. This is a simple line to open the repository. You need only specify
the location of the root of the repository (${SERVER_LOG_ROOT} in
general)
2. This is going to hold the data. It is an Iterable of lists.
Basically, you can iterate through the serverInstances in the
repository, and for each you can iterate through the actual log records
(you'll later see samples that focused on just the latest instance and
avoided the double-iterating ... but this is a good sample that shows
how easy it is to see the info and how well organized it is ... so bear
with me).
3. This getLogLists call does 95% of the work. It includes some simple
filter criteria (a severity range ... later you'll see a better way to do
more advanced filtering) and it gets you access to all of the log
records that meet the filter criteria in one line of code. How cool is
that!! Oh, and you're asking ... why bother with INFO to SEVERE, isn't
that SystemOut.log? Well, with HPEL, log and trace are separated, but
you don't need to care because we take care of the details for you. If
you want log and trace together, just ask; we take care of it. Another
thing you may notice missing here ... how do we handle file rollovers et
al? Do we just get you the data since the latest roll? NO!!! (Would I even have
brought it up if it was not another cool HPELism?) We do file
rollovers et al just like the legacy logging ... but our API (and our tools) mean
you don't need to care. We get you the data; you just ask.
4. This is the first layer of peeling the onion. It goes through the serverInstances one at a time
5. This is the second layer of the onion, for each serverInstance, this goes through the records
6. OK, to keep the sample simple, this lamely just gets the formatted
message and prints it. But if you pull this into Eclipse, you'll see
that you can get any info you want simply by calling the appropriate get
method on the RepositoryLogRecord. If I want to know the logger
(warning here: the logger name in the old SystemOut.log is truncated front
and back ... this gives you the full logger name ... take a breath before
continuing, because if you understand the significance, you will be
excited), I simply do repositoryLogRecord.getLoggerName(). No parsing, no
figuring out which type of record this is ... just ask for the info,
and it is served up.

So now you've seen how simple it is to go through all serverInstances
for a server and pull records. You'll see lots of variations, but these
concepts are needed going forward. You'll see how remote access (when
you use helpers to shield you from the JMX gorp) looks amazingly
identical to local access. You'll see that merging is so simple you
shouldn't even take credit for doing it (but take the credit anyway),
and you'll see that advanced filtering is as easy as getting the data in the
first place. If I hear feedback on this, I can provide sample
repositories, explanations, customized examples, et al. I'm here and
I'm listening. Talk to me.