This post outlined a possible way to test time-dependent logic: abstract the concept of current time behind a clock interface and provide a mock clock implementation for the tests. This matters because relying on the system clock to test, say, whether a particular action is triggered every day at 12:00, is otherwise tricky.

JodaTime makes it easier to implement the above. It already provides a MillisProvider interface (to abstract the concept of current time) which can be accessed through methods from the DateTimeUtils class, thus saving you from writing your own.

//fix the current time to 1000 millis
DateTimeUtils.setCurrentMillisFixed(1000);
... run time-dependent logic
//time is still at 1000 millis
long time = DateTimeUtils.currentTimeMillis();
//restore the system clock once the test is done
DateTimeUtils.setCurrentMillisSystem();

Rule number 1 when running JBehave tests (or any integration test, really) is to execute the test scenarios independently from each other: the application state must be reset before each and every scenario run.

The code below presents a generic way to accomplish this reset when using JBehave and Spring. The approach is built around two components:

1) ScenarioContext

A class responsible for creating and cleaning up the data used by the test scenarios. This usually encompasses all kinds of static and reference data, plus any messages enqueued/dequeued at the boundaries of the application (in this particular example the messages being enqueued are trade messages).

This class is responsible for cleaning up the scenario context whenever a new test scenario is run. It is defined as a custom-scope Spring bean which creates a new instance of the object(s) under scope whenever JBehave is about to run a new scenario (@BeforeScenario is a JBehave annotation which allows the annotated method to run before each scenario).
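A minimal sketch of what such a context class can look like (the class, fields and method names here are hypothetical; in the real setup the reset method would be annotated with JBehave's @BeforeScenario and the class registered as a custom-scope Spring bean):

```java
import java.util.ArrayList;
import java.util.List;

//holds the data used by a scenario: reference data plus the trade
//messages enqueued at the boundaries of the application
public class ScenarioContext {

    private final List<String> referenceData = new ArrayList<String>();
    private final List<String> enqueuedTrades = new ArrayList<String>();

    public void enqueueTrade(String tradeMessage) {
        enqueuedTrades.add(tradeMessage);
    }

    public List<String> enqueuedTrades() {
        return enqueuedTrades;
    }

    //in the real setup this method carries JBehave's @BeforeScenario
    //annotation so it runs before each scenario
    public void reset() {
        referenceData.clear();
        enqueuedTrades.clear();
    }
}
```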

The builder pattern allows for the construction of an object step by step (property by property). This comes in handy when writing tests, as that is when we want to instantiate the object under test in precisely the state deemed useful for the test.

When the object under test is a “Thing”, with 2 properties name and description:

Then the first approach to constructing a builder for the Thing is to expose a list of “with-er” methods (chained setters by another name), eg. “withName(…)”, “withDescription(…)” and so on, where each method maps to a property of the Thing.
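A sketch of such a builder (the Thing class itself is assumed, with the two properties described above):

```java
//a Thing with two properties, name and description
class Thing {
    private final String name;
    private final String description;

    Thing(String name, String description) {
        this.name = name;
        this.description = description;
    }

    String name() { return name; }
    String description() { return description; }
}

//builder exposing one "with-er" method per property of the Thing
class ThingBuilder {
    private String name;
    private String description;

    ThingBuilder withName(String name) {
        this.name = name;
        return this; //returning this allows the calls to be chained
    }

    ThingBuilder withDescription(String description) {
        this.description = description;
        return this;
    }

    Thing build() {
        return new Thing(name, description);
    }
}
```

A test can then instantiate the object in exactly the state it needs, e.g. `new ThingBuilder().withName("a thing").build()`.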

With Jackson, the code required to instantiate the object under test is less verbose and more compact than with the builder approach. The downside is slightly more fragile code (more prone to typos), since the JSON string cannot be checked at compile time.

Testing methods which log exceptions can result in a messy build log, peppered with stack traces and error messages, without any obvious way to discern whether these errors are intentionally triggered by the tests.

Example:

The above test will succeed but also produce the following output in the logs:

The above output is in this case undesirable and can be hidden by using a JUnit rule which runs before the test to set the logging level to OFF, and then back again to its original level once the test is finished.
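The idea can be sketched with java.util.logging (the class name here is made up; with JUnit the before/after pair would live inside an ExternalResource rule, and the same approach works with the log4j or logback level APIs):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

//silences a logger for the duration of a test, then restores it
public class MuteLogging {

    private final Logger logger;
    private Level originalLevel;

    public MuteLogging(String loggerName) {
        this.logger = Logger.getLogger(loggerName);
    }

    //with JUnit this would run from an ExternalResource's before()
    public void before() {
        originalLevel = logger.getLevel();
        logger.setLevel(Level.OFF);
    }

    //...and this from after(), once the test is finished
    public void after() {
        logger.setLevel(originalLevel);
    }
}
```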

Are “lock-free” structures necessarily more performant (quicker) than the traditional approach to synchronization relying on locks? How does the number of threads impact performance? It’s time for a little speed test.

An AccountDate class (a simple wrapper around a Date) is updated 1,000,000 times in two scenarios:

Unsurprisingly the SimpleAccountDate is very fast (but wrong). Performance under synchronization degrades only very slightly with the number of threads, while the CAS strategy does not scale nearly as well. This can be explained by the higher number of threads causing higher contention on the single accountDate object, with many threads simultaneously attempting to update it, but only one thread at a time managing to do so.

To find out how many messages are received per minute: find the log lines containing the word ‘Received’, extract the hour+minute from each of these lines, then count the number of occurrences of each distinct hour+minute.
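The same recipe, sketched in Java for comparison (the log format is assumed: each line starts with an HH:mm:ss timestamp):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

//counts, for each distinct hour+minute, the number of 'Received' log lines
public class MessagesPerMinute {

    public static Map<String, Integer> count(List<String> logLines) {
        Map<String, Integer> perMinute = new LinkedHashMap<String, Integer>();
        for (String line : logLines) {
            if (!line.contains("Received")) continue;
            String hourMinute = line.substring(0, 5); //assumes a HH:mm:ss... prefix
            Integer n = perMinute.get(hourMinute);
            perMinute.put(hourMinute, n == null ? 1 : n + 1);
        }
        return perMinute;
    }
}
```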

Latency is the time taken for a message to travel from one system to another. Consequently the average latency is the sum of all latencies over the total number of messages processed (i.e the inverse of the throughput, which is total number of messages processed over total time taken to process these messages).

…Right ?

Wrong, in most cases. The above reasoning does not take into account the distribution of the latencies. The arithmetic mean / average when applied to a skewed distribution can be meaningless at best, and misleading at worst.

EXAMPLE

It takes 1 ms for System B to process each of 199 messages, and a further 801 ms to process the 200th message.
Throughput = 200/1000 = 0.2 msg/msec
Latency = 1/Throughput = 5 msec

…which system is “better” depends on the expectations of the client but clearly the average latency and throughput are identical even though these systems exhibit significantly different performance characteristics.

HOW TO FIX

Method 1)

The simplest and quickest method to get a more accurate representation of the “typical” latency is to take the median, which is better suited than the arithmetic mean for skewed distributions

median for system A = 5ms,
median for system B = 1ms

Method 2)

Use a histogram. You can build your own or re-use an existing one. The code below uses the Histogram class from the Disruptor package to print out the upper bound within which 99% of observations fall.
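Both the median (method 1) and a 99% upper bound (method 2) can also be computed without any library, by sorting the observed latencies and indexing into the sorted array. A naive sketch (class and method names are made up):

```java
import java.util.Arrays;

//naive percentile computation over a set of observed latencies
public class Percentiles {

    //returns the smallest observation such that 'fraction' of all
    //observations fall at or below it (0.5 = median, 0.99 = p99)
    public static long percentile(long[] latencies, double fraction) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(fraction * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }
}
```

Applied to the System B example above (199 messages at 1 ms, one at 801 ms), both the median and the p99 come out at 1 ms, while only the maximum exposes the 801 ms outlier.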

Demonstrates a few of the features of Scala such as for yield comprehensions, “static method” (or rather equivalent thereof), flatMap, reactions, traits..etc…Possibly not the game of the year, but Scala does make it fun to write even the simplest piece of code.

By default JBehave lays out HTML report data tables horizontally, with headers on the first row and values on the second row below. In some cases though it may be more visually appealing to organise the data vertically instead, with headers and rows in two separate columns.

Not to worry – JBehave can be heavily customized (heavily, but not easily… documentation is scarce). HTML Tables in particular are rendered by a freemarker macro “renderTable” located in ftl/jbehave-html-output.ftl. Simply edit this file and modify the below macro to swap headers and columns.

3) Finally the code required to make this test pass is actually written. In the example above this would be when the domain object Trade and its associated service TradeService are implemented. Note that in this scenario the TradeService is a dependency injected by Spring.

4) Last but not least – JBehave requires an entry point into the tests, a.k.a an Embedder. This is a piece of code which indicates to JBehave where to look for the stories files, how to handle failures, which reports to output …etc… Each of these behaviours can be easily customized.

There are several embedders to choose from but in this instance we use a “SpringAnnotatedEmbedderRunner” because it provides Spring-based dependency injection.

When JBehave runs, a test report will be generated for each story. It will look like this if all goes well (all green!):

If something goes wrong instead, the result will be:

Put together, all of the JBehave reports form living documentation of the system. Any member of the team can check in real time what the expected behaviour of the system is, without having to dig into the code. If JBehave is hooked into the continuous integration build (highly recommended), these reports will never go out of date.

A PriorityQueue is a tree-like structure where the nodes are ordered according to a Comparator function, eg. the lower the value of the node the higher up it will be in the tree, with the root node being the node with the minimum value.

New elements are originally inserted at the bottom of the tree then “sift up” to their appropriate position in the tree. Each sift up operation compares the value of the node to the value of its parent, and if the value of the node is smaller then both nodes are swapped.

The cost of the insert operation is then a function of the number of sift-up operations to execute… which in turn is a function of the height of the tree.

How high is the tree, then? In a balanced binary tree each parent has two children, so a tree holding n elements has a height of roughly log2(n), and an insert therefore costs O(log n) sift-up comparisons in the worst case.
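A sketch of the insert/sift-up logic for an array-backed min-heap (simplified compared to the JDK's PriorityQueue, but the same mechanism):

```java
import java.util.ArrayList;
import java.util.List;

//minimal min-heap: the parent of the node at index i sits at index (i - 1) / 2
public class MinHeap {

    private final List<Integer> nodes = new ArrayList<Integer>();

    public void insert(int value) {
        //new elements start at the bottom of the tree...
        nodes.add(value);
        siftUp(nodes.size() - 1);
    }

    //...then sift up: swap with the parent while smaller than it
    private void siftUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (nodes.get(i) >= nodes.get(parent)) break;
            Integer tmp = nodes.get(i);
            nodes.set(i, nodes.get(parent));
            nodes.set(parent, tmp);
            i = parent; //at most log2(n) iterations: the height of the tree
        }
    }

    public int min() {
        return nodes.get(0); //the root holds the minimum value
    }
}
```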

A sparse array in Java is a data structure which maps keys to values. Same idea as a Map, but different implementation:

A Map is represented internally as an array of lists, where each element in these lists is a key,value pair. Both the key and value are object instances.

A sparse array is simply made of two arrays: an array of primitive keys and an array of object values. There can be gaps in the array indices, hence the term “sparse” array. Example source code here.
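A bare-bones sketch of the idea (the real Android SparseArray additionally handles deletions and is more careful about growth; the point here is the two parallel arrays and the binary search over the sorted keys):

```java
import java.util.Arrays;

//maps int keys to Object values using two parallel arrays,
//keeping the keys array sorted so lookups can use binary search
public class SimpleSparseArray {

    private int[] keys = new int[8];
    private Object[] values = new Object[8];
    private int size = 0;

    public void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { //key already present: overwrite the value
            values[i] = value;
            return;
        }
        i = ~i; //binarySearch encodes the insertion point as -(point) - 1
        if (size == keys.length) { //grow both arrays
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    public Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? values[i] : null;
    }
}
```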

The main interest of the SparseArray is that it saves memory by using primitives instead of objects as keys. For instance the screenshot below (courtesy of VisualVM) shows the memory used when storing 1,000,000 elements in a sparse array of <int, String> vs a HashMap of <Integer, String>:

The difference in size between both structures can be explained by :

– difference in size of the key: an int is 4 bytes while an Integer is typically 16 bytes (JVM-dependent).

– the overhead of a HashMap entry compared to an array element: a HashMap.Entry instance must keep track of references to the key, the value and the next entry, plus it also needs to store the hash of the entry as an int.

Methodology: use the JUnit benchmark framework to measure the time elapsed with and without bit-twiddling for four operations (each executed several million times): multiply, divide, modulo and finding the next power of two.

Configuration: JDK 8 running on MacOS 10.7, intel i5 processor.

Results

For each test two results are shown: time taken without bit-twiddling first, with bit-twiddling second.

There’s no discernible performance gain when using bit-twiddling to execute multiplications, divisions or modulo. This is hardly surprising as these simple operations should already be heavily optimised by the JVM.

Less heavily JVM-optimised code (eg. finding the next power of two) shows large potential performance wins, which may, in some cases, justify the less readable code associated with bit-shift techniques.
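For reference, the folklore bit-twiddling version of “next power of two” versus a naive loop (class and method names are made up):

```java
//computes the smallest power of two greater than or equal to n (n >= 1)
public class PowerOfTwo {

    //bit-twiddling version: smear the highest set bit rightwards, then add one
    public static int next(int n) {
        n--;
        n |= n >> 1;
        n |= n >> 2;
        n |= n >> 4;
        n |= n >> 8;
        n |= n >> 16;
        return n + 1;
    }

    //naive version, for comparison
    public static int nextNaive(int n) {
        int p = 1;
        while (p < n) p *= 2;
        return p;
    }
}
```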

The ArrayList is ubiquitous in Java programming and is often picked as the default List implementation. Is this always a good choice? Let’s take a peek at the source code for some of the key methods of ArrayList.java and find out.

Data structure

As the name implies an ArrayList is backed by an array, no surprises here.

Pretty straightforward – check that the index parameter is valid and return the element located at this index. Because all elements are stored as Objects a cast from Object to the type parameter of the ArrayList is required beforehand.

First check that the array is large enough to accommodate the new element (if not, grow the array). Then add the element at the end of the array. Growing the array involves calculating a new capacity and copying the existing array into an array of bigger capacity, an expensive operation:

elementData = Arrays.copyOf(elementData, newCapacity);

The more elements in the array the more expensive the copy will get.

To remove an element

Removing is another potentially expensive operation as another array copy is required (unless the element removed happens to be the last element of the array). An element is removed by taking all the elements on its right hand side and copying them in place of where this element used to be.

Note the increment of the modCount variable above. This variable is written to by any operation which structurally changes the content of the list (ie. add, remove, clear…). And it is read by methods iterating over the content of the list to detect whether any structural changes occurred. If that’s the case, the iteration may yield incorrect results and a ConcurrentModificationException is thrown.
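A minimal illustration of the modCount check in action (class and method names are made up):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class ModCountDemo {

    //structurally modifying the list while iterating over it trips
    //the modCount check and throws a ConcurrentModificationException
    public static boolean failsOnModification() {
        List<String> list = new ArrayList<String>();
        list.add("a");
        list.add("b");
        list.add("c");
        try {
            for (String s : list) {
                if (s.equals("a")) {
                    list.remove(s); //structural change: modCount is incremented
                }
            }
            return false;
        } catch (ConcurrentModificationException expected) {
            return true; //the iterator detected the structural change
        }
    }
}
```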

In Summary:

An ArrayList’s ideal use case is when the number of structural operations is kept to a minimum, because 1) they don’t scale and 2) they might raise a ConcurrentModificationException.

Point 2) can be remedied by using a CopyOnWriteArrayList instead. The tradeoff in this case is that any write operation (structural or otherwise) will involve an array copy and therefore impact performance.

– The concurrent CMS garbage collector runs (mostly) concurrently with the application threads. The Parallel GC, on the other hand, uses multiple threads simultaneously, none of which runs concurrently with the user threads (they effectively stop the application world).

What would happen if a HashMap used a key which always returned the same hashcode? Answer in images.

The underlying structure of a HashMap is akin to an array, where each entry in the array is a linked list of key/value pairs. The hashcode of the key is used to quickly lookup one of the lists indexed by the array.

When the key produces well distributed hashcodes, each entry in the array points to a list with a small number (ideally just one) of key,value pairs.

If on the other hand the key always produces the same hashcode, then all array lookups will return the same list.

Our HashMap has turned into a linked list… with the consequence that lookup times are also similar to a linked list’s: O(n) versus O(1) for the original HashMap.
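The degenerate key can be demonstrated directly. Lookups still return the right values, because equals() disambiguates entries within the single list, just at O(n) cost (the class below is made up for the demonstration):

```java
import java.util.HashMap;
import java.util.Map;

//a key whose hashCode is constant: every instance lands in the same bucket
public class BadKey {

    private final int id;

    public BadKey(int id) {
        this.id = id;
    }

    @Override
    public int hashCode() {
        return 42; //same hashcode for every key
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}
```

Filling a HashMap with such keys still behaves correctly, but every get() walks the one list of colliding entries.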

There is no equivalent to C++ function pointers in Java, which means it is not possible to pass a method as a parameter to another method (unless reflection is used, but I won’t go there)… you can pass an interface instead though.
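A sketch of the interface-as-function-pointer idiom (all names are made up):

```java
//the "function pointer": an interface with a single method
interface Callback {
    int apply(int x);
}

public class Callbacks {

    //the method receiving the "function" as a parameter
    public static int applyTwice(Callback f, int x) {
        return f.apply(f.apply(x));
    }
}
```

The caller passes an (often anonymous) implementation of the interface, e.g. an increment callback, in place of a function pointer.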

The Fibonacci sequence (named after the Italian mathematician Leonardo Fibonacci) is a sequence of numbers where each number is the sum of the previous two numbers. This lends itself quite well to a recursive approach:

cons: totally useless for anything other than a very small sequence. The dual recursive calls on the last line are performance killers, eg. it takes several thousand calls just to calculate fibonacci(20).

pros: much quicker. Computes fibonacci(2000) in under 400 microseconds on an Intel Core i5.

cons: calculating a sequence with a term greater than 10,000 is pretty much guaranteed to trigger a stack overflow error. This is because, in the absence of tail call optimization, each recursive method call allocates a new stack frame, eventually exceeding the JVM stack size.

This leads to a third implementation where the logic is changed to “flatten” the recursive calls into a while loop like so:
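A sketch of such an iterative version (BigInteger is needed once the terms exceed the range of a long, around fibonacci(92)):

```java
import java.math.BigInteger;

public class Fibonacci {

    //iterative fibonacci: the recursion is flattened into a while loop,
    //carrying the last two terms along instead of growing the call stack
    public static BigInteger fibonacci(int n) {
        BigInteger previous = BigInteger.ZERO; //fibonacci(0)
        BigInteger current = BigInteger.ONE;   //fibonacci(1)
        int i = 0;
        while (i < n) {
            BigInteger next = previous.add(current);
            previous = current;
            current = next;
            i++;
        }
        return previous;
    }
}
```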

The code doesn’t flow quite as nicely as the first recursive implementation – but more importantly – it won’t trigger a stack overflow and it’s significantly faster than either of the two recursive functions, eg. fibonacci(2000) is calculated within around 100 microseconds.

This post berates Java for being overly verbose compared to Scala. At times (most of the time?) this is justified; however two examples in particular stand out, which I could not resist trying to improve upon:

Loading content from the classpath (such as loading the content of a file into a string) is a fairly common task, but still – there’s no API in the Java 6 SDK which provides an easy/concise way to do it… (as far as I know, happy to be proven wrong).

The alternative is to use a little help from the Apache commons-io library. In 5 lines of code:
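With commons-io the stream-to-string step collapses to IOUtils.toString(stream). For comparison, that same step can be sketched with just the JDK, using a Scanner (the helper class name is made up; the "\A" delimiter makes the Scanner slurp the whole stream):

```java
import java.io.InputStream;
import java.util.Scanner;

public class ClasspathUtils {

    //reads an entire InputStream into a String; with commons-io this
    //whole method collapses to IOUtils.toString(in)
    public static String toString(InputStream in) {
        Scanner scanner = new Scanner(in).useDelimiter("\\A");
        return scanner.hasNext() ? scanner.next() : "";
    }
}
```

Applied to a classpath resource: `toString(SomeClass.class.getResourceAsStream("/some-resource.txt"))` (the class and resource names here are hypothetical).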

I installed Cyanogenmod (version 7) on a Motorola Atrix about 3 weeks ago and I’m pleased to report it’s been rock-solid since. No stability issues whatsoever. No loss of functionality either – quite the opposite actually, because some Android apps require superuser rights, so they can only work on rooted devices.

– proficient enough to install the android sdk and fire up some basic commands like adb ?

– brave enough to void your warranty (for sure) and risk losing your phone (a small chance if you follow the instructions laid out below, but still… a possibility)

—

If the answers to all the above are a resounding yes – then you should consider upgrading to Cyanogenmod, like three million users (and counting). The procedure is fairly simple: head over to this page and follow the instructions specific to your phone. I recommend sticking to a stable distribution of Cyanogenmod when upgrading. Also, if you’re new to this it’s probably best to try it with an out-of-warranty phone first.

I followed the steps above with an Atrix 4G and it took about one hour to install Cyanogenmod (as a total beginner to ROM flashing). One minor niggle: for a while I could not get my phone to boot into ClockworkMod recovery mode – it turns out it is necessary to reboot into recovery mode straight after flashing ClockworkMod for the modification to stick. If the phone reboots normally instead, the stock Android recovery takes over.

The end result:

– on the plus side: much improved battery life and response time. All the useless pre-installed apps from the phone vendor are gone (this in itself makes the upgrade worth it).

– on the minus side: the latest stable version of Cyanogenmod for the Atrix doesn’t ship with Android 4 – yet (although some of the latest unofficial ROMs do).

Cucumber is a tool used to support behaviour driven development. Originally written in Ruby, it now has a JVM version called, quite logically, Cucumber-JVM.

The basic idea (very much summarized…) is to write acceptance tests for a new feature, together with the product owner, before the code is written. Then run the tests, see them fail, implement the missing behaviour, and re-run the tests until they pass.

The key objective here is to involve the product owners as much as possible in writing the tests, which from experience can be tricky as they do not generally have a technical background. So it’s important for an acceptance test framework to generate tests with a syntax as close as possible to natural language.

Cucumber achieves this quite well, see below for an example Cucumber script (click to zoom in)

The right panel defines the tests scripts to execute, easily understandable by non-technical people. No messing around with HTML either, a big win compared with alternative frameworks such as Fitnesse or Concordion.

The left panel maps the tests scripts to their associated junit tests. Full code source for this example is at: https://github.com/eleco/bdd

Cucumber outputs the tests results in a nicely-formatted page like so.

Testing time-sensitive business logic is essentially about being able to change the current time in our tests – somehow – and then checking how this affects the behaviour of the domain object being tested.

The primitive and brute-force way to do this is to manipulate the computer system clock by manually changing the current time prior to each test… Crucially this approach does not lend itself to running as part of an automated test suite, for obvious reasons.

The other (better) way is to use two different clocks: the production code can rely on the system clock while the test code depends on a custom clock, i.e. a clock which can be set up to return any particular time as the current time. Usually this custom clock will expose methods to advance/rewind the clock to specific points in time.

Both clocks implement the “now()” method defined in the Clock interface. The difference is that the now() method of SystemClock is a simple wrapper around a new Joda DateTime instance, while the now() method of CustomClock returns a DateTime attribute which can be modified through the tick() method to make time pass faster :) The custom clock is injected as a dependency of the test code and the system clock as a dependency of the production code.
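A sketch of the two clocks, using plain java.util.Date instead of Joda for brevity (the interface and method names follow the description above):

```java
import java.util.Date;

//abstracts the concept of current time
interface Clock {
    Date now();
}

//production implementation: a thin wrapper around the system clock
class SystemClock implements Clock {
    public Date now() {
        return new Date();
    }
}

//test implementation: now() returns whatever the test decides,
//and tick() makes time pass faster
class CustomClock implements Clock {

    private Date now;

    CustomClock(Date start) {
        this.now = start;
    }

    public Date now() {
        return now;
    }

    public void tick(long millis) {
        now = new Date(now.getTime() + millis);
    }
}
```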

While this solution has the advantage of being easy to understand, it also has one drawback: the number of milliseconds the main thread must sleep for is most likely wrong:

– either it’s too small and the main thread will repeatedly wake up too early, hogging CPU resources in the process (busy wait)
– or it’s too large and the main thread will wake up long after all resources have been initialised, resulting in an application with sluggish behaviour (and irate users).

The proper way to handle this scenario is to coordinate the two threads, either by using the wait and notify methods from the Object class, or alternatively with a CountDownLatch.
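A sketch of the CountDownLatch variant: the initialising thread counts the latch down when it’s done, and the main thread blocks in await() for exactly as long as needed, with no guessed sleep duration (class and method names are made up):

```java
import java.util.concurrent.CountDownLatch;

public class StartupCoordination {

    public static boolean initialiseAndWait() throws InterruptedException {
        final CountDownLatch ready = new CountDownLatch(1);

        Thread initialiser = new Thread(new Runnable() {
            public void run() {
                //... initialise resources here ...
                ready.countDown(); //signal that initialisation is complete
            }
        });
        initialiser.start();

        //blocks until countDown() has been called: no busy wait, no oversleeping
        ready.await();
        return true;
    }
}
```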

In the configuration below, log statements originating from the “com.firstpackage” package
will be directed to FirstFile.log, and also to the console.
Log statements from “com.secondpackage” will go to SecondFile.log, and to the console.
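A sketch of what this routing looks like as a log4j properties file (the appender names and file paths are illustrative; the same routing can be expressed in logback XML):

```properties
log4j.rootLogger=INFO, console

# package-level loggers; additivity means these also reach the root console appender
log4j.logger.com.firstpackage=DEBUG, firstFile
log4j.logger.com.secondpackage=DEBUG, secondFile

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout

log4j.appender.firstFile=org.apache.log4j.FileAppender
log4j.appender.firstFile.File=FirstFile.log
log4j.appender.firstFile.layout=org.apache.log4j.PatternLayout

log4j.appender.secondFile=org.apache.log4j.FileAppender
log4j.appender.secondFile.File=SecondFile.log
log4j.appender.secondFile.layout=org.apache.log4j.PatternLayout
```

Because loggers are additive by default, statements from com.firstpackage go both to FirstFile.log and, via the root logger, to the console.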

Awk is a Unix programming language specifically dedicated to the processing of text files.

While it’s been around for ages (it’s almost as old as Unix), it remains fairly unknown and/or underused compared to other utilities such as grep, vi, find… Strange really, as it’s powerful and quite easy to use.

Say some of the lines logged in there account for the time spent on one given algorithm:

13:42:07,019 [Thread-1] DEBUG Calculation: time spent on algo #12 is 831 ms

It would be interesting to parse all the lines containing the word algo to extract 1) the total time spent on algo calculation 2) the average time spent across all calculations.

In Java, coding such a parser from scratch would take about one full day for most developers (if not more)
– locate file, open and read , manage io exceptions
– parse lines (manage parsing exceptions)
– calculate results and print
– create a build script

By comparison the equivalent script can be written with Awk in minutes.

proceeding step by step

Step 1.
To print to screen all lines containing the term algo in the file myfile.log:

awk '/algo/ {print $0}' myfile.log

Note: Awk divides each line into columns, where a column is a block of text separated by whitespace.
Eg. in the format above [Date] [Thread] [Debug]… $0 will print the whole line, $1 the date, $2 the thread number, etc.

Step 2.
To count the number of lines containing the term algo:

awk '/algo/ {nb++} END {printf "%d", nb}' myfile.log

[here nb is a local variable declared on the fly and used as a line counter]

Step 3.
To keep a running total of the time spent on algo calculations:

awk '/algo/ {total+=$9} END {printf "%d", total}' myfile.log

[assuming the time spent on a calculation is printed on the 9th column of the line]

Step 4.
Putting it all together – print the total time spent plus the average per calculation:

awk '/algo/ {total+=$9; nb++} END {printf "total: %d ms, average: %d ms", total, nb ? total/nb : 0}' myfile.log

It seems I discover something new in NetBeans every day… The “Show dependency graph” option associated with every Maven project in NetBeans 6.8 had escaped me so far.

This menu option (accessible by right-clicking on a Maven project in the “projects” window) generates a graph of all the projects’ dependencies (libraries declared in the pom and their transitive dependencies).

Example below of the dependency graph of the libraries required by a project called, rather imaginatively, mavenproject1.

The graph will be rather hard to read for a large number of dependencies, but it’s nice to have nevertheless.

The TPTP plugin for Eclipse in its versions 4.3, 4.4 and 4.5 was fairly bug-prone – to the point of being barely usable. In particular it had a frustrating tendency to lock up the entire IDE… The latest version, 4.6.1, is more stable, even though setting it up still requires quite a lot of work… I only put up with it (just) because I prefer Eclipse over NetBeans as my main development tool.

2.) Download the agent controller separately
In theory this step is not really needed, as TPTP now contains its own Integrated Agent Controller (IAC).
In practice the standalone agent is more stable than the IAC.

3.) Unzip the agent controller on your local drive.

4.) Add the profiler DLLs to your path. From the command line:

set TPTP_AC_HOME=<path to your local agent controller installation>
set JAVA_PROFILER_HOME=%TPTP_AC_HOME%\plugins\org.eclipse.tptp.javaprofiler
set PATH=%JAVA_PROFILER_HOME%;%PATH%;%TPTP_AC_HOME%\bin

5.) Create a file called filters.txt (for example) where you’ll specify the classes which need to be profiled.

Content of the file:

com.myclasses* * INCLUDE
* * EXCLUDE

This will profile all the methods of all classes in the com.myclasses packages.
Note that filtering is essential if you don’t want to end up with hideously large profiling files.

6.) Run the application to be profiled.
Add the following to the Java command line used to run the app so that all execution details are collected:

-agentlib:JPIBootLoader=JPIAgent:server=standalone,filters=filters.txt;CGProf:execdetails=true

If all goes well, a file called trace.trcxml (by default) will start collecting the profiling info in the current directory.

7.) Open the profiling view in Eclipse and import the generated trace file
(a popup menu will appear where you can select additional filters and specific statistics to be run on the trace file).
Be prepared to wait if you specified too broad a set of filters in step 5)

8.) Once the import is finished, right-click on the profiling file and open it with the appropriate editor,
eg. use ExecutionStatistics and ExecutionFlow if profiling was run with execdetails=true

Caveat: profiling done this way doesn’t give realtime feedback on the behaviour of the app being profiled
(the trace.trcxml file needs to be fed into Eclipse repeatedly for up-to-date results).

On the plus side, this method works for all kinds of processes, remote or local, libraries or main programs.

If all else fails there’s always the NetBeans profiler, which is very good and free, or YourKit (www.yourkit.com), which is excellent (but not free…). Another alternative is VisualVM, which offers both profiling and sampling capabilities.

All these tools work pretty much out of the box, in stark contrast with TPTP.

“On demand computing”, as provided by Google Collections, turns a map into a computing map, which can be used as a basic cache for rare (but expensive) lookups.

Although the javadoc for this functionality is a good start, it can be opaque at times, especially for developers not entirely accustomed to the functional style of programming prevalent in the library.

void retrieveValuesFromComputingMap() {
    Key k = new Key(...);
    //first call: the map computes a value from the key and stores it
    Value v = computingMap.get(k);
    //subsequent calls to retrieve the value associated with the key
    //fetch it directly from the map, skipping any further computation
    Value v2 = computingMap.get(k);
}
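In Google Collections the computing map is built via MapMaker by supplying a Function from key to value. The same contract can be sketched with just the JDK (here with ConcurrentHashMap.computeIfAbsent, available on Java 8+, purely to illustrate the behaviour; the class and counter are made up for the demonstration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ComputingMapDemo {

    //counts how many times the (expensive) computation actually runs
    static final AtomicInteger computations = new AtomicInteger();

    static final ConcurrentMap<String, String> computingMap =
            new ConcurrentHashMap<String, String>();

    //first call for a key computes and caches; later calls hit the cache
    static String get(String key) {
        return computingMap.computeIfAbsent(key, k -> {
            computations.incrementAndGet(); //the expensive lookup happens here
            return k.toUpperCase();
        });
    }
}
```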

As can be inferred from its name, a ThreadLocal class (javadoc here) provides thread-local variables, i.e. variables for which each thread has its own independent copy. ThreadLocals are fairly prevalent in technical frameworks as a means of storing transaction and security contexts on a per-thread/per-request basis:

-> ORM frameworks such as Hibernate and Ibatis use it to bind each thread to a session.

-> A search through the Koders website for ThreadLocal usages in Java projects returns more than 15,000 hits…

Inner workings

Get/set operations on a ThreadLocal instance respectively read/write to a HashMap of {ThreadLocal instance, Object value} key/value pairs, where the HashMap instance accessed is a function of the current thread. Therefore a particular ThreadLocal instance can end up being associated with different object values, depending on which thread is current (and hence which HashMap is being looked up).

Data structures involved

Thread
– Holds a reference to an instance of ThreadLocalMap.

ThreadLocal
– Defines ThreadLocalMap as a static nested class.
– Declares a custom hashcode, used as a key to locate entries in the ThreadLocalMap

ThreadLocalMap
– Maps ThreadLocals to object values.

Operations

Creating a new instance of ThreadLocal:

ThreadLocal<String> threadLocal = new ThreadLocal<String>();

A ThreadLocal object is created, and associated with a newly generated custom hashcode (used to locate entries in the ThreadLocalMap in constant time).

Setting a ThreadLocal instance to a specific Object value:

String aString = "ThreadLocalTest";
threadLocal.set(aString);

– threadLocal retrieves the ThreadLocalMap referenced from the current thread.
– An entry for the {threadLocal,aString} pair is inserted into the ThreadLocalMap retrieved.

Getting an object from a ThreadLocal instance:

//outputs the value which has previously been set to threadLocal by the current thread...
//assuming we're on the same thread throughout this example, output value will be "ThreadLocalTest"
System.out.println (threadLocal.get());

– threadLocal retrieves the ThreadLocalMap referenced from the current Thread.
– The ThreadLocalMap just retrieved returns the entry mapping to threadLocal.
– The Object value (“ThreadLocalTest” String) is extracted from the entry and returned to ThreadLocal.
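The mechanics above can be observed directly: the same ThreadLocal instance yields a different value on each thread (the demo class is made up):

```java
public class ThreadLocalDemo {

    //one ThreadLocal instance, shared by all threads
    static final ThreadLocal<String> threadLocal = new ThreadLocal<String>();

    //each thread sets its own value then reads it back;
    //the results array records what each thread saw
    public static String[] runTwoThreads() throws InterruptedException {
        final String[] seen = new String[2];
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                threadLocal.set("value-from-thread-1");
                seen[0] = threadLocal.get();
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                threadLocal.set("value-from-thread-2");
                seen[1] = threadLocal.get();
            }
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return seen;
    }
}
```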

Animating graphics is a rather protracted process in “traditional” languages (Java, C++, C#…).

The logic needed to animate each object will involve (at minimum):

– Saving the old position of the object
– calculating the new position, function of time
– calling a graphics routine to erase object at old position
– calling a graphics routine to re-draw the object at new position

… this can easily lead to a fair amount of code, and complexity (and therefore potential bugs).

JavaFX simplifies the whole process significantly.

1- The bind keyword keeps the data model and graphics interface synchronized at all times. There is therefore no need to explicitly update the GUI when the underlying object’s position changes; JavaFX handles this for you. The snippet below draws a circle whose position will be updated whenever the value of the fields x or y of the underlying object changes.

2- The Timeline class is used to specify the position of the animated object on screen at specific times. It’s as straightforward as specifying where the object should draw at specific intervals. JavaFX will interpolate between the specified times and positions, so there is no need to explicitly compute the object position; again, it’s all done for you. The Timeline below will update the y field to be 70 at time t=0 and 100 at time t=5. The values in between (times t=1,2,3,4…) will be interpolated.

The main application protocols relevant to the Java ecosystem, and how they can be set up to maximise interoperability:

Network protocols

The network stack is organized in layers (as defined in the OSI model), with each layer reusing the services of the layer immediately below.

From bottom to top, the main layers and protocols associated:

– Network protocol layer (#3 in the OSI model): IP

– Transport protocol layer (#4 in the OSI model): TCP, UDP. The tradeoff here is performance (UDP) vs reliability (TCP).
Due to its inherent unreliability, UDP is mostly used for video, gaming, chat, etc.
Note that it is possible to introduce some reliability on top of UDP, e.g. RUDP. TIBCO RV works on that principle.

By default the Windows Task Manager does not show the PID of a process… but this can easily be fixed:

1. Start the Windows task manager. This can be done either via the command line by running “taskmgr”, or by right-clicking on the Windows taskbar and selecting the “Start Task Manager” option. You should see something like this:

2. Now click on the “Processes” tab, select the “View” menu and pick the “Select Columns” option. Ensure that the PID checkbox is ticked.

3. Click OK to go back to the processes tab, and the PID should be visible:

The PID can be used in conjunction with the netstat command to find out which process runs on a given port:

netstat -aon | find "1234" will show which PID is associated with port 1234; then use the Task Manager to look up the process associated with that PID.

– In the “Projects” view, right-click on the project to be JNLP-enabled.

– Select Application -> Web Start

– Tick ‘Enable Web Start’ and ‘Self-signed’

That’s it. A clean build will now produce a .jnlp file in the project’s dist directory, and if all goes well the next run will now invoke the Web Start mechanism.

With this configuration NetBeans will use a different certificate for each project when signing the associated jars. Which is fine… until there’s a need to reuse a common set of jars from two different projects. This will cause Web Start to fail with: “JAR resources in JNLP file are not signed by same certificate”.

One possible solution is to use the extensions mechanism built into JNLP. See here and there for more details.

The other is to create your own certificate and use it to sign all jars.

– Install the NetBeans keystore plugin, part of the mobility pack module (Tools->Plugins-> pick the mobility module and restart NB).

What is it
As the name implies – it’s a mock framework,
i.e. it allows for the creation of mock objects, to be used in place of “real” objects (often external dependencies such as databases, JMS servers…) when unit testing.

What’s good about it
The main features are listed on the Mockito website.
Two features stand out:
– ability to mock classes as well as interfaces
– lean API which makes for a shorter learning curve and more readable code when compared with existing mock frameworks such as EasyMock, JMockit…

import static org.mockito.Mockito.*;

class BusinessLogicTest {
    public void testExecute() {
        // initialise the mock system
        ExternalSystem mockSystem = mock(ExternalSystem.class);
        // make sure the fetch method of the mocked ExternalSystem will return "ABC"
        // (only needed if that value is critical to the test)
        stub(mockSystem.fetch()).toReturn("ABC");
        // exercise the class under test, injecting the mock
        // so that execute() hits the mocked system initialised above
        new BusinessLogic(mockSystem).execute();
        // check the mock system has been called exactly once
        verify(mockSystem, times(1)).fetch();
    }
}
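For contrast, here is roughly what the hand-rolled equivalent of that mock would look like (the ExternalSystem interface and its fetch method are assumed from the example above): canned return value plus manual call counting, which is exactly the boilerplate Mockito generates for you.

```java
// the external dependency the class under test talks to (assumed interface)
interface ExternalSystem {
    String fetch();
}

// a hand-written stub: fixed return value and manual call counting
class StubExternalSystem implements ExternalSystem {
    int fetchCount = 0;

    public String fetch() {
        fetchCount++;
        return "ABC";
    }
}

public class StubDemo {
    public static void main(String[] args) {
        StubExternalSystem stub = new StubExternalSystem();
        System.out.println(stub.fetch());
        System.out.println(stub.fetchCount);
    }
}
```

Multiply this by every dependency and every test fixture, and the appeal of mock(ExternalSystem.class) becomes obvious.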

This post showed how multiple threads synchronizing on two resources in a different order can lead to deadlock. An obvious solution is to arrange for all threads to acquire the locks in the same order.

Alternatively the synchronized blocks can be replaced by reentrant locks. The tryLock method of the ReentrantLock class is used to acquire a lock, returning false immediately if that lock is held by another thread (instead of blocking forever, as would be the case with synchronized).

Example follows – livelock issues are not dealt with at this point (i.e. both threads, whilst never blocked, may still not get any work done if they keep colliding when trying to acquire the locks).
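The original example is not reproduced in this excerpt; the non-blocking behaviour of tryLock can be demonstrated with a minimal sketch (class and method names hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    // attempts to acquire the lock from a second thread and reports the outcome
    public static boolean tryFromOtherThread(ReentrantLock lock) throws InterruptedException {
        final boolean[] acquired = new boolean[1];
        Thread t = new Thread(() -> {
            // returns false immediately if another thread holds the lock;
            // a synchronized block would have blocked here instead
            acquired[0] = lock.tryLock();
            if (acquired[0]) {
                lock.unlock();
            }
        });
        t.start();
        t.join();
        return acquired[0];
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread takes the lock first
        System.out.println("acquired by second thread: " + tryFromOtherThread(lock)); // false
        lock.unlock();
        System.out.println("acquired by second thread: " + tryFromOtherThread(lock)); // true
    }
}
```

In the two-lock deadlock scenario, each thread would tryLock both locks and, on failure, release whatever it holds and retry – which is precisely where the livelock caveat above comes from.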

– the amount of code required for each SQL statement is substantial and often subject to copy-and-paste.
– as always, the more lines of code written, the higher the probability of introducing bugs: in particular it’s not rare for connections to leak (opened and never closed) or for query parameters to be bound to the wrong values.

2) in-house ORM framework

An in-house framework aims at solving the issues found in stage 1). Typically it performs the following:

+ hides low-level, bug-prone JDBC code behind a custom facade.
+ externalises SQL into configuration files, which allows the SQL to be modified without recompiling and makes for clearer Java code.

– the learning curve can be significant, as in-house documentation is rarely up to date (when it exists at all)
– the effort required to maintain and document the framework, and to keep it up to date with new releases of drivers / databases / the JDBC API, is substantial.
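The externalised-SQL idea from point 2) can be sketched with a plain properties file; the query name and SQL below are hypothetical:

```java
import java.io.StringReader;
import java.util.Properties;

public class SqlCatalog {
    // looks up a named query; in a real in-house framework the queries would be
    // loaded from a .properties or .xml file on the classpath – a StringReader
    // stands in for that file here
    public static String lookup(String queriesFile, String name) throws Exception {
        Properties queries = new Properties();
        queries.load(new StringReader(queriesFile));
        return queries.getProperty(name);
    }

    public static void main(String[] args) throws Exception {
        String file = "trade.findById=SELECT * FROM trade WHERE id = ?\n";
        // the SQL can now be changed in the file without recompiling any Java code
        System.out.println(lookup(file, "trade.findById"));
    }
}
```

The facade over JDBC would then fetch statements by name, bind the parameters and take care of closing connections in one place.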

3) Ibatis

Ibatis is an open source ORM framework which automatically maps Java objects to SQL queries using XML
configuration files.
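For illustration, a mapped statement in an iBatis 2.x sqlMap file looks roughly like this (table and class names hypothetical):

```xml
<sqlMap namespace="Trade">
  <!-- the SQL lives in XML, not in Java; the result is mapped onto the Trade class -->
  <select id="findById" parameterClass="long" resultClass="Trade">
    SELECT * FROM trade WHERE id = #value#
  </select>
</sqlMap>
```

The Java side then simply calls the statement by name, keeping full control over the SQL itself.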

4) Hibernate

Hibernate is a full-blown ORM framework which maps Java classes to database tables and can generate the SQL itself.

– steep learning curve
– when using HQL: loss of control over how the SQL queries are generated

Note that there’s no hierarchy of values here: Hibernate is not always preferable to raw JDBC (especially on very small projects).
It all depends on the context (although I find that Ibatis strikes a nice balance between Hibernate and pure-JDBC code).