The basic problem is that generics and arrays do not play well together. Generics were implemented to be backwards compatible with earlier code, which meant implementing them using erasure: generic type information that is available at compile time is removed at run time. Types whose full type information is not available at run time are referred to as non-reifiable. Arrays, in contrast, are reified: information about an array's element type is retained and can be used at run time.

Whenever you invoke a varargs method, an array is created to hold the varargs parameters. If the element type of this array is not reifiable, the invocation is likely to generate a compiler warning. On the other hand, it is not permitted to create an array whose component type is a concrete parameterized type, because doing so is not type safe.

Here is a simple example that attempts generic array creation.

// Generic array creation is illegal - won't compile
List<List<String>> numbersInThreeLanguages = Arrays
        .asList(new List<String>[] {
                Arrays.asList("Un", "Deux", "Trois"),
                Arrays.asList("Uno", "Dos", "Tres"),
                Arrays.asList("One", "Two", "Three") });

The legal code would look like the following, but it would still generate an unchecked warning. The warning can be suppressed at the call site because this invocation is considered safe:

@SuppressWarnings(value = "unchecked")
List<List<String>> numbersInThreeLanguages = Arrays.asList(
        Arrays.asList("Un", "Deux", "Trois"),
        Arrays.asList("Uno", "Dos", "Tres"),
        Arrays.asList("One", "Two", "Three"));

These unchecked warnings, issued when invoking varargs methods whose declarations are flagged as candidates for inducing heap pollution, are harmless for many common calls. Several widely used helper methods in the JDK library produce such warnings, forcing client code to explicitly suppress them at every call site:

· public static <T> List<T> Arrays.asList(T... a)

· public static <T> boolean Collections.addAll(Collection<? super T> c, T... elements)

· public static <E extends Enum<E>> EnumSet<E> EnumSet.of(E first, E... rest)

The Coin varargs improvement adds a new @Documented annotation type, java.lang.SafeVarargs, that can be used at the declaration site to suppress unchecked warnings for varargs invocations that are considered safe, such as the ones listed above.

When the JDK libraries are retrofitted to take advantage of this new annotation type, client code can safely remove the SuppressWarnings annotations from call sites. For example, Arrays.asList() would carry this special annotation, which suppresses these warnings at the call site.

In addition to this annotation type, a new mandatory compiler warning, "Possible heap pollution from parameterized vararg type: {0}", is generated at the declaration site of problematic varargs methods that can induce heap pollution. To illustrate this, consider the code below.

// [unchecked] Possible heap pollution from parameterized vararg
public static <T> void displayElements(T... array) {
    for (T element : array) {
        System.out.println(element.getClass().getName() + ": " + element);
    }
}

Since this method takes a variable number of generic arguments, a run-time problem known as heap pollution can occur. Heap pollution occurs when a variable of a parameterized type refers to an object that is not of that parameterized type. At compile time, this manifests itself as an unchecked warning; at run time, it can result in a java.lang.ClassCastException. Use the @SafeVarargs annotation to designate a method as one that avoids heap pollution.

Methods that use a variable number of generic arguments result in a compile-time warning. However, not all of them can actually cause a run-time exception. The @SafeVarargs annotation is used to mark the safe methods as safe. If a run-time exception is possible, the annotation should not be used.
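To see how this works at the declaration site, here is a minimal sketch of a safe varargs method (the class and method names are hypothetical). The method only reads its varargs array, so annotating it with @SafeVarargs is legitimate; note that the annotation is only permitted on static or final methods:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SafeVarargsDemo {

    // Hypothetical helper: it only reads the elements of the varargs
    // array and never stores anything into it, so it cannot cause
    // heap pollution and may safely carry @SafeVarargs.
    @SafeVarargs
    public static <T> List<T> listOf(T... elements) {
        List<T> result = new ArrayList<T>();
        for (T element : elements) {
            result.add(element);
        }
        return result;
    }

    public static void main(String[] args) {
        // No unchecked warning at this call site, even though the
        // element type List<String> is non-reifiable.
        List<List<String>> numbers = listOf(
                Arrays.asList("Un", "Deux", "Trois"),
                Arrays.asList("One", "Two", "Three"));
        System.out.println(numbers.size()); // prints 2
    }
}
```

Had listOf stored an Object into its parameter array before returning it, the annotation would have been a lie, and a ClassCastException could surface far from the call site.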

1.2 Class Loaders

Multithreaded class loading

The function of a java.lang.ClassLoader is to locate the bytecode for a particular class, then transform that bytecode into a usable class in the runtime system. Prior to JDK 7, certain types of custom class loaders were prone to deadlock, when they used a cyclic delegation model. In JDK 7, the locking mechanism has been modified to avoid deadlock.

Deadlock scenario

Consider the following scenario. Thread1 tries to use a ClassLoader1 (locking ClassLoader1) to load class1. It then delegates the loading of class2 to ClassLoader2. At the same time, Thread2 uses ClassLoader2 (locking ClassLoader2) to load class3, and then delegates the loading of class4 to ClassLoader1. Since both class loaders are locked and both the threads need both loaders, a deadlock situation occurs.

Class Loader Synchronization in the Java SE 7 Release

The Java SE 7 release includes the concept of a parallel capable class loader. Loading a class with a parallel capable class loader now synchronizes on the pair consisting of the class loader and the class name. Parallel capable class loaders are required to register themselves during their initialization, using the registerAsParallelCapable method.

If a custom class loader uses an acyclic, hierarchical delegation model, no changes are needed. In the hierarchical delegation model, a class loader delegates first to its parent class loader. Class loaders that do not use the hierarchical delegation model should be constructed as parallel capable class loaders.

To create new custom class loaders, the process is similar in the Java SE 7 release as in previous releases. Create a subclass of ClassLoader, then override the findClass() method and possibly loadClass(). Overriding loadClass() makes your life more difficult, but it is the only way to use a different delegation model.

If you have a custom class loader with a risk of deadlocking, with the Java SE 7 release, you can avoid deadlocks by following these rules:

1. Invoke the registerAsParallelCapable() method in your custom class loader's class initializer, and check that all class loader classes it extends do the same in theirs. Ensure that they are multithread safe for concurrent class loading.

2. If your custom class loader overrides only findClass(String), you do not need further changes. This is the recommended mechanism for creating a custom class loader.

3. If your custom class loader overrides either the protected loadClass(String, boolean) method or the public loadClass(String) method, you must also ensure that the protected defineClass() method is called only once for each class loader and class name pair.
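Following these rules, a minimal parallel capable class loader might look like the sketch below. The classBytes map and addClass method are hypothetical stand-ins for however your loader actually obtains class files; the important parts are the static initializer and the fact that only findClass is overridden:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ParallelLoader extends ClassLoader {

    static {
        // Register this loader type as parallel capable, so class
        // loading locks on the (class loader, class name) pair rather
        // than on the whole loader instance.
        registerAsParallelCapable();
    }

    // Hypothetical source of class bytes, e.g. filled from a jar file
    // or a network connection.
    private final Map<String, byte[]> classBytes =
            new ConcurrentHashMap<String, byte[]>();

    public void addClass(String name, byte[] bytes) {
        classBytes.put(name, bytes);
    }

    // Overriding only findClass keeps the default parent-first
    // delegation model, which is the recommended approach.
    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] bytes = classBytes.get(name);
        if (bytes == null) {
            throw new ClassNotFoundException(name);
        }
        return defineClass(name, bytes, 0, bytes.length);
    }
}
```

Because loadClass is not overridden, requests for classes this loader does not know about (such as JDK classes) are delegated to the parent as usual.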

The fork/join framework

The fork/join framework is an approach that supports breaking a problem into smaller and smaller pieces, solving them in parallel, and then combining the results. The new java.util.concurrent.ForkJoinPool class supports this approach. It is designed to work with multi-core systems, ideally with dozens or hundreds of processors.

The ForkJoinPool class is derived from java.util.concurrent.AbstractExecutorService, making it an ExecutorService. It is designed to work with ForkJoinTasks, though it can also execute ordinary Runnable and Callable tasks. The ForkJoinPool class differs from other executors in that its threads attempt to find and execute subtasks created by other currently running tasks. This is called work-stealing.

The ForkJoinPool class can be used for problems where the computation on the subproblems either modifies data in place or returns a value. When a value is returned, a subclass of java.util.concurrent.RecursiveTask is used. Otherwise, a subclass of java.util.concurrent.RecursiveAction is used.

Your code should look similar to this:

if (my portion of the work is small enough)
    do the work directly
else
    split my work into two pieces
    invoke the two pieces and wait for the results

Suppose you want to compute the sum of squares of the integers in the numbers array.

private static class SumOfSquaresTask extends RecursiveTask<Long> {

    private final int thresholdValue = 1000;
    private int from;
    private int to;

    public SumOfSquaresTask(int from, int to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        long sum = 0L;
        int mid = (to + from) / 2;
        if ((to - from) < thresholdValue) {
            for (int i = from; i < to; i++) {
                sum += numbers[i] * numbers[i];
            }
            return sum;
        } else {
            List<RecursiveTask<Long>> forks = new ArrayList<>();
            SumOfSquaresTask task1 = new SumOfSquaresTask(from, mid);
            SumOfSquaresTask task2 = new SumOfSquaresTask(mid, to);
            forks.add(task1);
            task1.fork();
            forks.add(task2);
            task2.fork();
            for (RecursiveTask<Long> task : forks) {
                sum += task.join();
            }
            return sum;
        }
    }
}

Create a task that represents all of the work to be done.

SumOfSquaresTask fb = new SumOfSquaresTask(0, numbers.length);

Create the ForkJoinPool that will run the task.

ForkJoinPool pool = new ForkJoinPool();

Run the task.

long result = pool.invoke(fb);
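When the computation modifies data in place and returns no result, the same divide-and-conquer pattern is written against RecursiveAction instead. Here is a hypothetical sketch that squares the elements of an array in place (the threshold value is an arbitrary assumption):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class SquareInPlace extends RecursiveAction {

    private static final int THRESHOLD = 1000; // assumed cutoff
    private final long[] data;
    private final int from;
    private final int to;

    public SquareInPlace(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected void compute() {
        if (to - from < THRESHOLD) {
            for (int i = from; i < to; i++) {
                data[i] = data[i] * data[i]; // modify in place, nothing returned
            }
        } else {
            int mid = (from + to) / 2;
            // invokeAll forks one half and runs the other in this thread
            invokeAll(new SquareInPlace(data, from, mid),
                      new SquareInPlace(data, mid, to));
        }
    }

    public static void main(String[] args) {
        long[] data = new long[5000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
        new ForkJoinPool().invoke(new SquareInPlace(data, 0, data.length));
        System.out.println(data[3]); // prints 9
    }
}
```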

The ForkJoinPool class has several methods that report on the state of the pool, including:

· getPoolSize: This method returns the number of worker threads that have started but not yet terminated

· getRunningThreadCount: This method returns an estimate of the number of worker threads that are not blocked waiting to join tasks

· getActiveThreadCount: This method returns an estimate of the number of threads that are currently stealing or executing tasks
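A small sketch of calling these reporting methods on an idle pool follows; a newly created pool starts no worker threads until work is submitted, so all three values begin at zero:

```java
import java.util.concurrent.ForkJoinPool;

public class PoolStateDemo {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool();
        // No tasks have been submitted yet, so no workers exist.
        System.out.println("pool size: " + pool.getPoolSize());
        System.out.println("running threads: " + pool.getRunningThreadCount());
        System.out.println("active threads: " + pool.getActiveThreadCount());
        pool.shutdown();
    }
}
```

In a real application these methods are most useful for monitoring a pool while tasks are in flight; the running and active counts are estimates and may be stale by the time they are read.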

Supporting multiple threads using the ThreadLocalRandom class

The java.util.concurrent package has a new class, ThreadLocalRandom, which supports functionality similar to the Random class. However, using this new class from multiple threads results in less contention and better performance than sharing a single Random instance. When multiple threads need random numbers, the ThreadLocalRandom class should be used: the random number generator is local to the current thread.

Usages of this class should typically be of the form:

ThreadLocalRandom.current().nextX(...) (where X is Int, Long, etc.)

When all usages are of this form, it is never possible to accidentally share a ThreadLocalRandom across multiple threads.

The methods of this class return uniformly distributed numbers. In addition to the methods inherited from Random, it adds:

· current(): Returns the ThreadLocalRandom for the current thread

· nextInt(int least, int bound), nextLong(long least, long bound), nextDouble(double least, double bound): Return a pseudorandom value between least (inclusive) and bound (exclusive)

· nextLong(long n), nextDouble(double n): Return a pseudorandom value between 0 (inclusive) and n (exclusive)
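A typical usage sketch, with each thread drawing its values from its own generator; the DiceRoller class name is made up for illustration:

```java
import java.util.concurrent.ThreadLocalRandom;

public class DiceRoller implements Runnable {

    @Override
    public void run() {
        // current() returns the generator bound to the calling thread.
        // Never cache the result in a field shared between threads.
        int roll = ThreadLocalRandom.current().nextInt(1, 7); // 1..6
        System.out.println(Thread.currentThread().getName()
                + " rolled " + roll);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            new Thread(new DiceRoller()).start();
        }
    }
}
```

Because each thread obtains its own generator via current(), there is no shared state to contend on, unlike four threads hammering one Random instance.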

Coordinating threads using the Phaser class

The java.util.concurrent.Phaser class is concerned with synchronizing threads that work together in cycles of phases. The threads execute and then wait for the completion of the other threads in the group. When all of the threads have completed, one phase is done. The Phaser can then be used to coordinate the execution of the same set of threads again.

Unlike the case for other barriers, the number of parties registered to synchronize on a phaser may vary over time.

Example:

A Phaser may be used instead of a CountDownLatch to control a one-shot action serving a variable number of parties. The typical idiom is for the method setting this up to first register, then start the actions, then deregister, as in:

void runTasks(List<Runnable> tasks) {
    final Phaser phaser = new Phaser(1); // "1" to register self
    // create and start threads
    for (final Runnable task : tasks) {
        phaser.register();
        new Thread() {
            public void run() {
                phaser.arriveAndAwaitAdvance(); // await all creation
                task.run();
            }
        }.start();
    }
    // allow threads to start and deregister self
    phaser.arriveAndDeregister();
}

One way to cause a set of threads to repeatedly perform actions for a given number of iterations is to override onAdvance:

void startTasks(List<Runnable> tasks, final int iterations) {
    final Phaser phaser = new Phaser() {
        protected boolean onAdvance(int phase, int registeredParties) {
            return phase >= iterations || registeredParties == 0;
        }
    };
    phaser.register();
    for (final Runnable task : tasks) {
        phaser.register();
        new Thread() {
            public void run() {
                do {
                    task.run();
                    phaser.arriveAndAwaitAdvance();
                } while (!phaser.isTerminated());
            }
        }.start();
    }
    phaser.arriveAndDeregister(); // deregister self, don't wait
}

If the main task must later await termination, it may re-register and then execute a similar loop:

// ...
phaser.register();
while (!phaser.isTerminated())
    phaser.arriveAndAwaitAdvance();

Related constructions may be used to await particular phase numbers in contexts where you are sure that the phase will never wrap around Integer.MAX_VALUE. For example:

void awaitPhase(Phaser phaser, int phase) {
    int p = phaser.register(); // assumes caller not already registered
    while (p < phase) {
        if (phaser.isTerminated())
            // ... deal with unexpected termination
        else
            p = phaser.arriveAndAwaitAdvance();
    }
    phaser.arriveAndDeregister();
}

To create a set of n tasks using a tree of phasers, you could use code of the following form, assuming a Task class with a constructor accepting a Phaser that it registers with upon construction. After invocation of build(new Task[n], 0, n, new Phaser()), these tasks could then be started, for example by submitting to a pool:

void build(Task[] tasks, int lo, int hi, Phaser ph) {
    if (hi - lo > TASKS_PER_PHASER) {
        for (int i = lo; i < hi; i += TASKS_PER_PHASER) {
            int j = Math.min(i + TASKS_PER_PHASER, hi);
            build(tasks, i, j, new Phaser(ph));
        }
    } else {
        for (int i = lo; i < hi; ++i)
            tasks[i] = new Task(ph);
        // assumes new Task(ph) performs ph.register()
    }
}

The best value of TASKS_PER_PHASER depends mainly on expected synchronization rates. A value as low as four may be appropriate for extremely small per-phase task bodies (thus high rates) or up to hundreds for extremely large ones.

Implementation notes: This implementation restricts the maximum number of parties to 65535. Attempts to register additional parties result in an IllegalStateException. However, you can and should create tiered phasers to accommodate arbitrarily large sets of participants.