Friday, May 8, 2015

This short article deals with the artifact life cycle through obsolescence and modernization. It also offers some hints for designing artifacts so as to delay obsolescence and anticipate modernization.

Artifact life cycle

The following diagram illustrates the life cycle of any artifact:

The artifact life cycle starts with the implementation of a modern artifact.

But time goes on, and leads inevitably to software erosion:

the artifact no longer matches technology standards,

user experience is degraded by outdated user interfaces,

huge technical debt,

security vulnerabilities,

insufficient features,

high TCO (total cost of ownership).

At this point, there are two possible choices: modernize the artifact, or build a new one that meets current standards and finally end the cycle by decommissioning the old artifact.

Modernization is realized through rewriting code, replacing frameworks and platforms, and designing new architectures. It is preferable to automate the modernization process.

Delay obsolescence

To prevent obsolescence from occurring too soon, you can try to:

carefully build your software stack to rely on proven frameworks that will last for at least several years

set up a continuous inspection platform to measure the software code quality and keep a high maintainability of the source code

rely on an efficient software factory to build, assemble and deploy. That way, your artifacts are easy to evolve and maintain.

standardize your practices and architecture choices to avoid maintaining an overly heterogeneous set of technologies across your artifacts.

Anticipate modernization

To simplify the process of modernization, you can:

favor simple design to ease the reverse engineering activity when the modernization team works on the artifacts

design by abstraction to easily replace one specific part, without rewriting the whole artifact

rely heavily on automated tests to lower the cost of testing changes induced by modernization, and to easily spot any regression that modernization could introduce
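The "design by abstraction" advice above can be sketched in a minimal, hypothetical Java example (all names here are invented for illustration): the persistence of metrics is hidden behind an interface, so the modernization team can replace the implementation without rewriting the rest of the artifact.

```java
import java.util.ArrayList;
import java.util.List;

// The abstraction: callers depend only on this interface
interface MetricStore {
    void save(String metric);
    List<String> findAll();
}

// One implementation; a modernization effort could swap it for a
// database-backed or cloud-backed store without touching Monitor
class InMemoryMetricStore implements MetricStore {
    private final List<String> metrics = new ArrayList<>();
    public void save(String metric) { metrics.add(metric); }
    public List<String> findAll() { return new ArrayList<>(metrics); }
}

public class Monitor {
    private final MetricStore store;

    // The concrete store is injected: modernizing the persistence layer
    // only means passing another MetricStore implementation
    Monitor(MetricStore store) { this.store = store; }

    void record(String metric) { store.save(metric); }

    public static void main(String[] args) {
        Monitor monitor = new Monitor(new InMemoryMetricStore());
        monitor.record("freeMemory=42");
        System.out.println(monitor.store.findAll().size()); // prints 1
    }
}
```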

Sunday, February 23, 2014

This short article introduces twiddle, a command line utility to administer a JBoss instance, and shows how you can monitor a given JBoss instance with it through a shell script.

JBoss provides a simple command line tool that allows for interaction with a remote JMX server instance. This tool is called twiddle (for twiddling bits via JMX) and is located in the bin directory of the distribution. Twiddle is a command execution tool, not a general command shell.

#!/bin/sh
# Path to the log file (the directory /var/log/jboss must exist; adapt this path to your needs)
LOGFILE=/var/log/jboss/jbossmon-`hostname`-`date +"%Y%m%d"`.log
echo "Logging JBoss metrics to $LOGFILE"
# Function that uses twiddle to get a JBoss metric, given:
# - a first parameter that is the name of the MBean
# - a second parameter that is the name of the attribute of the MBean
# Notice that you may need to adapt the call to twiddle to use authentication or a dedicated port,
# e.g.: ./twiddle.sh -u user -p password -s jnp://localhost:9099
# Also update the path to twiddle.sh if this script is not saved in the same directory,
# i.e. the bin directory of the JBoss distribution.
# (The "function" keyword is a bashism; the POSIX syntax below also works with /bin/sh)
getJbossMetric() {
    ./twiddle.sh get "$1" "$2" | cut -d'=' -f2
}
# Collect metrics every 5 min, forever (naive approach)
# Notice that a better way would be to replace the loop by a scheduler like cron
until false
do
    # Collect some metrics from JBoss using twiddle
    FREEMEM=$(getJbossMetric jboss.system:type=ServerInfo FreeMemory)
    TOTALMEM=$(getJbossMetric jboss.system:type=ServerInfo TotalMemory)
    THREADS=$(getJbossMetric jboss.system:type=ServerInfo ActiveThreadCount)
    # NB: you need to update the path property to reflect the context path of your web application
    SESSIONS=$(getJbossMetric jboss.web:host=localhost,path=/contextPathOfMyWebApp,type=Manager activeSessions)
    echo "-------------------------------------"
    echo "Free memory: $FREEMEM"
    echo "Total memory: $TOTALMEM"
    echo "Active Threads: $THREADS"
    echo "Sessions: $SESSIONS"
    echo "-------------------------------------"
    # Append metrics to the log file in a CSV manner
    TIMESTAMP=`date +"%d/%m/%Y %T"`
    echo "$TIMESTAMP;$TOTALMEM;$FREEMEM;$SESSIONS;$THREADS" >> "$LOGFILE"
    # Wait for 300 s (5 min)
    sleep 300
done

I hope this will be useful to those who want to monitor JBoss in a simple way.
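Once the log file has accumulated data, it can be processed with standard tools. Here is a small sketch that computes the average free memory from the CSV format written above (the sample data and file name are hypothetical, so that the snippet is self-contained):

```shell
#!/bin/sh
# Fields in the CSV log: timestamp;totalmem;freemem;sessions;threads
LOGFILE=jbossmon-sample.log

# Hypothetical sample data standing in for a real log file
cat > "$LOGFILE" <<EOF
01/05/2015 10:00:00;536870912;268435456;12;40
01/05/2015 10:05:00;536870912;134217728;15;42
EOF

# Average of the 3rd field (free memory), in bytes
awk -F';' '{ sum += $3; count++ } END { if (count > 0) printf "%d\n", sum / count }' "$LOGFILE"
# prints 201326592
```

The same one-liner, with another field index, works for sessions or thread counts.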

To conclude, you can see how easy it is to monitor the memory usage of your objects. It is very handy when dealing with huge collections, or when using caches such as the ones provided by Guava or EHCache. That way, you can set up triggers that alert when memory consumption is excessive.

UPDATE: 2013-07-30

To answer rxin's comment, I did a quick (trivial) test with a list of 1,000,000 random strings, which represents 116 MB in memory.

For the test, I sized the heap with the following JVM options: -Xms256m -Xmx256m.

Below is the memory usage during the test:

From my point of view, the memory overhead introduced by JAMM is negligible in this simple test case, but notice that the measureDeep() method takes time (reflection is slow).
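The JAMM-based test code from the original post is not reproduced in this excerpt. As a much rougher, JAMM-free sketch, you can compare heap usage before and after building such a list with Runtime (the class name and the scaled-down list size are choices of this example; System.gc() is only a hint, so the figure is approximate and far less precise than measureDeep()):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class StringListFootprint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // best-effort hint to stabilize the heap before measuring
        long before = rt.totalMemory() - rt.freeMemory();

        // 100,000 random strings (scaled down from the 1,000,000 of the original test)
        Random random = new Random(42);
        List<String> strings = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            strings.add(Long.toString(random.nextLong()));
        }

        System.gc(); // the list is still referenced below, so it survives this hint
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println(strings.size() + " strings, approx. "
                + (after - before) / 1024 + " KB retained");
    }
}
```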

Screenshot

Finally, a screenshot of the PrimeFaces line chart representing the memory usage:

Notice that it is possible to zoom with drag and drop, if you have a lot of points displayed.

Conclusion

As you can see, it is quite easy to monitor the JVM and to display collected metrics within a JSF page. You can go further by monitoring CPU usage for instance, or by adding some features like starting/stopping/resetting the monitoring.

Sunday, May 5, 2013

This article explains how Java instrumentation works, and how you can write your own agent to do some basic profiling/tracing.

Overview

The JVM instrumentation feature was introduced in JDK 1.5 and is based on byte code instrumentation (BCI). When a class is loaded, you can alter the corresponding byte code to introduce features such as method execution profiling or event tracing. Most Java Application Performance Management (APM) solutions use this mechanism to monitor the JVM.

Instrumentation Agent

To enable JVM instrumentation, you have to provide one or more agents, each deployed as a JAR file. An attribute in the JAR file manifest specifies the agent class that will be loaded to start the agent.
There are 2 ways to load the agent:

with a command-line interface: by adding the option -javaagent:jarpath[=options] to the command line, where jarpath is the path to the agent JAR file and options is the agent options. This switch may be used multiple times on the same command line, thus creating multiple agents. More than one agent may use the same jarpath.

by dynamic loading: the JVM must implement a mechanism to start agents sometime after the VM has started. That way, a tool can "attach" an agent to a running JVM (for instance, profilers or ByteMan).
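As mentioned above, the agent class is declared in the JAR manifest. A minimal MANIFEST.MF could look like the following (com.example.MyAgent is a placeholder for your own agent class; the Can-* attributes are optional capability declarations):

```
Premain-Class: com.example.MyAgent
Agent-Class: com.example.MyAgent
Can-Redefine-Classes: true
Can-Retransform-Classes: true
```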

After the JVM has initialized, the agent class will be loaded by the system class loader. If the class loader fails to load the agent, the JVM will abort.

Next, the JVM instantiates an implementation of the Instrumentation interface and, depending on the context, tries to invoke one of the two methods that an agent must implement: premain or agentmain.

The premain method is invoked after JVM initialization, and the agentmain method is invoked sometime after the JVM has started (if the JVM provides such a mechanism). When the agent is started using the -javaagent command-line option, the agentmain method is not invoked; conversely, when the agent is attached after JVM startup, the premain method is not invoked.

Each of these methods has two possible signatures, (String agentArgs, Instrumentation inst) and (String agentArgs), and the agent needs to implement only one of them. The JVM first attempts to invoke the two-argument signature; if the agent class does not implement it, the JVM will attempt to invoke the one-argument signature.

Byte Code Instrumentation

From its premain or agentmain method, the agent can register a ClassFileTransformer instance, i.e. an implementation of this interface that transforms class files. To register the transformer, the agent calls the addTransformer method of the Instrumentation instance it was given.

Now, all future class definitions will be seen by the transformer, except definitions of classes upon which any registered transformer depends. The transformer is called when classes are loaded, when they are redefined, and optionally, when they are retransformed (if the transformer was added to the instrumentation instance with the boolean canRetransform set to true).

The following method of the ClassFileTransformer interface is responsible for any class file transformation:
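The code listing that originally followed did not survive in this copy; the method in question is transform. Below is a minimal pass-through sketch (the class name and the demonstration main are this example's additions; returning null tells the JVM to load the class unchanged):

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

// Minimal ClassFileTransformer sketch: logs each class name and returns null,
// which means "leave this class unchanged"
public class TracingTransformer implements ClassFileTransformer {

    @Override
    public byte[] transform(ClassLoader loader,
                            String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        System.out.println("Loading class: " + className);
        // A real agent would rewrite classfileBuffer here (e.g. with ASM or Javassist)
        return null;
    }

    // Not part of the agent API: a tiny demonstration of the pass-through behavior
    public static void main(String[] args) {
        byte[] result = new TracingTransformer()
                .transform(null, "com/example/Foo", null, null, new byte[] {1, 2, 3});
        System.out.println(result == null); // prints true
    }
}
```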

And if you enable code coverage in your favorite IDE, you should have something like this:

Red regions show the parts of the code that are not covered by the test. In our case, we can see that the code that handles the IO exception is never invoked, and it is not possible to cover all the code of the method without raising an IOException. Fortunately, there is a powerful tool for this: ByteMan.

As explained by the authors:

Byteman is a tool which simplifies tracing and testing of Java programs. Byteman allows you to insert extra Java code into your application, either as it is loaded during JVM startup or even after it has already started running.

In fact, ByteMan is a Java agent that does instrumentation, and we are going to use it to inject an IO exception during the execution of our test, to exercise the code that deals with it. The idea is to throw an IOException when the method FileOutputStream.write(byte[]) is called.

First, add the dependency for ByteMan:

<dependency>
    <groupId>org.jboss.byteman</groupId>
    <artifactId>byteman-bmunit</artifactId>
    <version>2.1.2</version>
</dependency>

Second, modify the previous test to add ByteMan to our JUnit test and write a new method to test the code that handles IOException. A ByteMan rule is declared with the following attributes:

targetClass and targetMethod: to set the method that must be instrumented

targetLocation: to set when to inject the code (at the beginning, at the end or somewhere in the body of the method)

condition: to set some condition to be met

action: the code to be injected

So, we have a rule named "IOException Rule" that is activated when the method write(byte[]) of the FileOutputStream class begins. The rule then injects code that throws an IOException. When the shouldHandleIOException test runs, every time write(byte[]) is called, a java.io.IOException is raised.

The log(String) method is now fully covered (the implicit constructor of the LogHelper class is not tested, which explains the missing percentage points to reach 100%).

Conclusion

ByteMan is very useful to test code that deals with exceptions, especially fault-tolerant code. That way, you can check that your fault tolerance strategy is really resilient, or that your code produces relevant traces when errors occur.