
Upgrade to OCPJP 7: Concurrency

Use java.util.concurrent collections

The java.util.concurrent package includes a number of additions to the Java Collections
Framework. These are most easily categorized by the collection interfaces provided:

BlockingQueue defines a first-in-first-out data structure that blocks or
times out when you attempt to add to a full queue, or retrieve from an empty queue.
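A minimal sketch of this blocking/timeout behavior, using ArrayBlockingQueue as one possible BlockingQueue implementation (the capacity and element values are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue with capacity 2.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        queue.put("first");   // succeeds immediately
        queue.put("second");  // fills the queue

        // offer(..) with a timeout returns false instead of blocking forever
        // when the queue stays full.
        boolean added = queue.offer("third", 100, TimeUnit.MILLISECONDS);
        System.out.println("third added? " + added);  // false

        System.out.println(queue.take());             // "first" (FIFO order)
    }
}
```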

ConcurrentMap is a subinterface of java.util.Map that defines
useful atomic operations. These operations remove or replace a key-value pair only if the
key is present, or add a key-value pair only if the key is absent. Making these operations
atomic helps avoid synchronization. The standard general-purpose implementation of
ConcurrentMap is ConcurrentHashMap, which is a concurrent
analog of HashMap.
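A sketch of these atomic operations on a ConcurrentHashMap (the key and values are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();

        // Add only if the key is absent; the second call has no effect.
        map.putIfAbsent("hits", 1);
        map.putIfAbsent("hits", 99);

        // Replace only if the key is currently mapped to the expected value.
        boolean replaced = map.replace("hits", 1, 2);   // true

        // Remove only if the key is currently mapped to the given value.
        boolean removed = map.remove("hits", 99);       // false, value is 2

        System.out.println(map.get("hits"));            // 2
        System.out.println(replaced + " " + removed);   // true false
    }
}
```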

ConcurrentNavigableMap is a subinterface of ConcurrentMap
that supports approximate matches. The standard general-purpose implementation of
ConcurrentNavigableMap is ConcurrentSkipListMap, which is
a concurrent analog of TreeMap.

Special-purpose List and Set implementations are provided for use in situations where read operations
vastly outnumber write operations and iteration cannot or should not be synchronized:

CopyOnWriteArrayList - a List implementation backed by an array.

This is a thread-safe variant of ArrayList in which all mutative operations (add, set, and
so on) are implemented by making a fresh copy of the underlying array.

This is ordinarily too costly, but may be more efficient than alternatives when traversal operations vastly outnumber mutations,
and is useful when you cannot or don't want to synchronize traversals, yet need to preclude interference among concurrent threads.
The "snapshot" style iterator method uses a reference to the state of the array at the point that the iterator was created. This
array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to
throw ConcurrentModificationException. The iterator will NOT reflect additions, removals, or changes to the list
since the iterator was created. Element-changing operations on iterators themselves (remove, set, and
add) are NOT supported. These methods throw UnsupportedOperationException.
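The snapshot-iterator behavior can be sketched as follows (the element values are arbitrary):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotIteratorDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
        list.add(1);
        list.add(2);

        Iterator<Integer> it = list.iterator(); // snapshot of [1, 2]
        list.add(3);                            // mutation copies the array

        // The iterator still sees only the snapshot; no
        // ConcurrentModificationException is thrown.
        while (it.hasNext()) {
            System.out.println(it.next());      // prints 1 then 2, never 3
        }
        System.out.println(list.size());        // 3
    }
}
```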

CopyOnWriteArraySet - a Set implementation backed by a copy-on-write array. This implementation is similar
in nature to CopyOnWriteArrayList. Unlike most Set implementations, the add, remove,
and contains methods require time proportional to the size of the set. This implementation is well-suited to maintaining
event-handler lists that must prevent duplicates.

All of these collections help avoid Memory Consistency Errors by defining a happens-before relationship
between an operation that adds an object to the collection and subsequent operations that access or
remove that object.

Use atomic variables and locks

Compare and swap (CAS)

The first processors that supported concurrency provided atomic test-and-set operations, which generally operated on
a single bit. The most common approach taken by current processors, including Intel and SPARC processors, is to implement
a primitive called compare-and-swap, or CAS. (On Intel processors, compare-and-swap is implemented by the cmpxchg family
of instructions. PowerPC processors have a pair of instructions called "load and reserve" and "store conditional" that
accomplish the same goal; similar for MIPS, except the first is called "load linked.")

A CAS operation includes three operands - a memory location (V), the expected old value (A), and a new value (B). The
processor will atomically update the location to the new value if the value that is there matches the expected old value,
otherwise it will do nothing. In either case, it returns the value that was at that location prior to the CAS instruction.
(Some flavors of CAS will instead simply return whether or not the CAS succeeded, rather than fetching the current value.)
CAS effectively says "I think location V should have the value A; if it does, put B in it, otherwise, don't change it but
tell me what value is there now."

The natural way to use CAS for synchronization is to read a value A from an address V, perform a multistep computation to
derive a new value B, and then use CAS to change the value of V from A to B. The CAS succeeds if the value at V has not
been changed in the meantime.

Instructions like CAS allow an algorithm to execute a read-modify-write sequence without fear of another thread modifying
the variable in the meantime, because if another thread did modify the variable, the CAS would detect it (and fail) and the
algorithm could retry the operation. The listing below illustrates the behavior (but not the performance characteristics) of
the CAS operation; the value of CAS is that it is implemented in hardware and is extremely lightweight (on most processors):
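The referenced listing does not appear in the text; the following reconstruction (an assumption) simulates CAS semantics with intrinsic locking, illustrating behavior only:

```java
// Simulates the semantics of compare-and-swap using intrinsic locking.
// This illustrates behavior only; a real CAS is a single hardware instruction.
public class SimulatedCAS {
    private int value;

    public synchronized int get() {
        return value;
    }

    // Atomically: if value == expected, set it to newValue.
    // Always returns the value that was present before the call.
    public synchronized int compareAndSwap(int expected, int newValue) {
        int old = value;
        if (old == expected) {
            value = newValue;
        }
        return old;
    }

    // Variant that reports success instead of returning the old value.
    public synchronized boolean compareAndSet(int expected, int newValue) {
        return expected == compareAndSwap(expected, newValue);
    }
}
```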

Concurrent algorithms based on CAS are called lock-free, because threads do not ever have to wait for a lock (sometimes called
a mutex or critical section, depending on the terminology of your threading platform). Either the CAS operation succeeds or it
doesn't, but in either case, it completes in a predictable amount of time. If the CAS fails, the caller can retry the CAS operation
or take other action as it sees fit. Listing below shows the counter class written to use CAS:
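The referenced counter listing is not shown; the following sketch (an assumption) uses AtomicInteger's compareAndSet as the CAS primitive, retrying until the swap succeeds:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A thread-safe counter built on CAS: read the current value, compute the
// new one, then attempt to swap; retry if another thread changed the value
// in the meantime.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int get() {
        return value.get();
    }

    public int increment() {
        int current;
        do {
            current = value.get();  // read the current value
        } while (!value.compareAndSet(current, current + 1)); // retry on failure
        return current + 1;
    }
}
```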

An algorithm is said to be wait-free if every thread will continue to make progress in the face of
arbitrary delay (or even failure) of other threads. By contrast, a lock-free algorithm requires only
that some thread always make progress.

Nonblocking algorithms are used extensively at the operating system and JVM level for tasks such as thread
and process scheduling. While they are more complicated to implement, they have a number of advantages over
lock-based alternatives - hazards like priority inversion and deadlock are avoided, contention is less
expensive, and coordination occurs at a finer level of granularity, enabling a higher degree of parallelism.

Until Java SE 5, it was not possible to write wait-free, lock-free algorithms in the Java language without
using native code. With the addition of the atomic variables classes in the java.util.concurrent.atomic
package, that has changed. The atomic variable classes all expose a compare-and-set primitive (similar to
compare-and-swap), which is implemented using the fastest native construct available on the platform
(compare-and-swap, load linked/store conditional, or, in the worst case, spin locks). Nine flavors of
atomic variables are provided in the java.util.concurrent.atomic package: AtomicInteger, AtomicLong,
AtomicReference, and AtomicBoolean; array forms of atomic integer, long, and reference; and the atomic
marked reference and stamped reference classes, which atomically update a pair of values.

The atomic variable classes can be thought of as a generalization of volatile variables, extending the concept of
volatile variables to support atomic conditional compare-and-set updates. Reads and writes of atomic variables have
the same memory semantics as read and write access to volatile variables.

Operations on atomic variables get turned into the hardware primitives that the platform provides for concurrent
access, such as compare-and-swap.

Java Atomic Operations

In Java, the increment operation on a primitive int is not atomic. When an integer is incremented, the
following logical steps are performed by the JVM:

Retrieve the value of the integer from memory

Increment the value

Assign the newly incremented value back to the appropriate memory location

Return the value to the caller

So although we write the increment operation as a single line of Java code, such as:

int n = i++;

each of the aforementioned steps occurs in the JVM. The danger is that if multiple threads all try
to increment the same value, two or more of them may read the same value (step 1), then
increment it (step 2), and then assign the new value back (step 3). If two threads each increment the
number 5, you would expect the result to be 7, but instead both read 5, both compute 6, and both assign
6 back to the integer's memory location.

With the release of Java SE 5, Sun included a java.util.concurrent.atomic package that addresses this
limitation. Specifically, it added classes including the following:

AtomicBoolean - A boolean value that may be updated atomically. An AtomicBoolean
is used in applications such as atomically updated flags, and cannot be used as a replacement for a Boolean.

AtomicInteger - An int value that may be updated atomically. An AtomicInteger is
used in applications such as atomically incremented counters, and cannot be used as a replacement for an Integer.
However, this class does extend Number to allow uniform access by tools and utilities that deal with
numerically-based classes.

AtomicIntegerArray - An int array in which elements may be updated atomically.

AtomicLong - A long value that may be updated atomically. An AtomicLong is used
in applications such as atomically incremented sequence numbers, and cannot be used as a replacement for a Long.
However, this class does extend Number to allow uniform access by tools and utilities that deal with
numerically-based classes.

AtomicLongArray - A long array in which elements may be updated atomically.

Each of these atomic classes provides methods to perform common operations, and each operation is guaranteed to be performed
as a single atomic operation. For example, rather than incrementing an integer using the standard increment operator, like the following:

int n = ++i;

You can ensure that the (1) get value, (2) increment value, (3) update memory, and (4) assign the new value to n steps are
all accomplished without fear of another thread interrupting your operation by writing your code as follows:
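A sketch of the atomic equivalent using AtomicInteger (the variable names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIncrementDemo {
    public static void main(String[] args) {
        AtomicInteger i = new AtomicInteger(0);

        // Equivalent to n = ++i, but performed as one atomic operation:
        // no other thread can interleave with the intermediate steps.
        int n = i.incrementAndGet();

        System.out.println(n);        // 1
        System.out.println(i.get());  // 1
    }
}
```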

In addition, the AtomicInteger class provides the following operations:

int addAndGet(int delta) - Add delta to the integer and then return the new value.

int decrementAndGet() - Decrement the integer and return its value.

int getAndAdd(int delta) - Return the value and then add delta to it.

int getAndDecrement() - Return the value and then decrement.

int getAndIncrement() - Return the value and then increment it.

int getAndSet(int newValue) - Return the value and then set it to the newValue
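The operations above can be demonstrated with a short sketch (the starting value 5 is arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicOperationsDemo {
    public static void main(String[] args) {
        AtomicInteger i = new AtomicInteger(5);

        System.out.println(i.addAndGet(3));       // 8  (adds, then returns new value)
        System.out.println(i.getAndIncrement());  // 8  (returns old value, value becomes 9)
        System.out.println(i.decrementAndGet());  // 8  (decrements 9 -> 8, returns new value)
        System.out.println(i.getAndSet(42));      // 8  (returns old value, then sets 42)
        System.out.println(i.get());              // 42
    }
}
```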

Locks in Java

A lock is a thread synchronization mechanism similar to a synchronized block, except that locks can be
more sophisticated than Java's synchronized blocks.

From Java SE 5, the package java.util.concurrent.locks contains several lock implementations, so you may not have
to implement your own locks:

The Lock interface supports locking disciplines that differ in semantics (reentrant, fair, etc.),
and that can be used in non-block-structured contexts including hand-over-hand and lock reordering algorithms.
The main implementation is ReentrantLock.

The ReadWriteLock interface similarly defines locks that may be shared among readers but are exclusive to
writers. Only a single implementation, ReentrantReadWriteLock, is provided, since it covers most standard
usage contexts. But programmers may create their own implementations to cover nonstandard requirements.

The Condition interface describes condition variables that may be associated with Locks. These
are similar in usage to the implicit monitors accessed using Object.wait(), but offer extended capabilities.
In particular, multiple Condition objects may be associated with a single Lock. To avoid
compatibility issues, the names of Condition methods are different from the corresponding Object
versions.

The AbstractQueuedSynchronizer class serves as a useful superclass for defining locks and other synchronizers that
rely on queuing blocked threads. The LockSupport class provides lower-level blocking and unblocking support that is
useful for those developers implementing their own customized lock classes.
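The Counter class discussed next is not shown in the text; the following minimal reconstruction (an assumption) matches the inc() method it describes:

```java
// A minimal counter protected by intrinsic locking; the inc() method
// synchronizes on "this" so only one thread at a time can update count.
public class Counter {
    private long count = 0;

    public long inc() {
        synchronized (this) {
            return ++count;
        }
    }

    public long get() {
        synchronized (this) {
            return count;
        }
    }
}
```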

Notice the synchronized(this) block in the inc() method. This block makes sure that only one thread at a
time can execute return ++count.

The purpose of the synchronized keyword is to provide the ability to allow serialized entrance to synchronized
methods in an object. Although almost all the needs of data protection can be accomplished with this keyword, it is too
primitive when the need for complex synchronization arises. More complex cases can be handled by using classes that achieve
similar functionality as the synchronized keyword. These classes are available beginning in Java SE 5.

The synchronization tools in Java SE 5 implement a common interface: the Lock interface. For now, the two methods
of this interface that are important to us are lock() and unlock(). Using the Lock interface
is similar to using the synchronized keyword: we call the lock() method at the start of the method and
call the unlock() method at the end of the method, and we have effectively synchronized the method.

The lock() method grabs the lock. The difference is that the lock can now be more easily envisioned: we now have an actual
object that represents the lock. This object can be stored, passed around, and even discarded. As before, if another thread owns the
lock, a thread that attempts to acquire the lock waits until the other thread calls the unlock() method of the lock. Once
that happens, the waiting thread grabs the lock and returns from the lock() method. If another thread then wants the
lock, it has to wait until the current thread calls the unlock() method.

The Counter class could have been written using a Lock instead of a synchronized block:
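The Lock-based listing is not shown; a minimal sketch (an assumption) using ReentrantLock, with unlock() in a finally block so the lock is always released:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final Lock lock = new ReentrantLock();
    private long count = 0;

    public long inc() {
        lock.lock();          // grab the lock; waits if another thread holds it
        try {
            return ++count;
        } finally {
            lock.unlock();    // always release, even if the body throws
        }
    }
}
```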

Use Executors and ThreadPools

In large-scale applications, it makes sense to separate thread management and creation from the rest of the application. Objects that
encapsulate these functions are known as executors.

The java.util.concurrent package defines three executor interfaces:

Executor - a simple interface that supports launching new tasks:

public interface Executor {
    /**
     * Executes the given command at some time in the future. The command
     * may execute in a new thread, in a pooled thread, or in the calling
     * thread.
     */
    void execute(Runnable command);
}

The Executor interface provides a single method, execute(...), designed to be a drop-in replacement for a common
thread-creation idiom. If r is a Runnable object and e is an Executor object, you
can replace:

(new Thread(r)).start();

with

e.execute(r);

However, the definition of execute(...) is less specific. The low-level idiom creates a new thread and launches
it immediately. Depending on the Executor implementation, execute may do the same thing, but is more
likely to use an existing worker thread to run r, or to place r in a queue to wait for a worker thread
to become available.

The executor implementations in java.util.concurrent are designed to make full use of the more advanced
ExecutorService and ScheduledExecutorService interfaces, although they also work with the base
Executor interface.

ExecutorService - a subinterface of Executor, which adds features that help manage the lifecycle,
both of the individual tasks and of the executor itself:

The ExecutorService interface supplements execute(...) with a similar, but more versatile
submit(...) method. Like execute(...), submit(...) accepts Runnable objects,
but also accepts Callable objects, which allow the task to return a value:
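A sketch of submitting a Callable and retrieving its result (the pool size and task body are arbitrary):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // A Callable returns a value (and may throw a checked exception),
        // unlike Runnable's void run().
        Callable<Integer> task = () -> 2 + 2;

        Future<Integer> future = executor.submit(task);
        System.out.println(future.get());  // 4 - blocks until the task completes

        executor.shutdown();
    }
}
```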

The submit(...) method returns a Future
object, which is used to retrieve the Callable return value and to manage the status of both Callable and
Runnable tasks.

ExecutorService also provides methods for submitting large collections of Callable objects. Finally,
ExecutorService provides a number of methods for managing the shutdown of the executor. To support immediate shutdown,
tasks should handle interrupts correctly.

The ScheduledExecutorService interface supplements the methods of its parent ExecutorService with schedule,
which executes a Runnable or Callable task after a specified delay. In addition, the interface defines
scheduleAtFixedRate and scheduleWithFixedDelay, which execute specified tasks repeatedly, at defined
intervals.

Use the parallel Fork/Join framework

New in the Java SE 7 release, the fork/join framework is an implementation of the ExecutorService interface that helps you take advantage of
multiple processors. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to
enhance the performance of your application.

As with any ExecutorService, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct
because it uses a work-stealing algorithm. Worker threads that run out of things to do can steal tasks from other threads that are still busy.

The center of the fork/join framework is the ForkJoinPool class, an extension of AbstractExecutorService. ForkJoinPool
implements the core work-stealing algorithm and can execute ForkJoinTasks.

Basic use

Using the fork/join framework is simple. The first step is to write some code that performs a segment of the work. Your code should look similar to
this:

if (my portion of the work is small enough)
do the work directly
else
split my work into two pieces
invoke the two pieces and wait for the results

Wrap this code in a ForkJoinTask subclass, typically one of its more specialized types, RecursiveTask (which can return a result)
or RecursiveAction (which cannot).

After your ForkJoinTask is ready, create one that represents all the work to be done and pass it to the invoke() method of a
ForkJoinPool instance.
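The steps above can be sketched as a RecursiveTask that sums an array by splitting it recursively (the threshold and data are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a range of an int array by splitting the work recursively.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 4;
    private final int[] data;
    private final int from, to;   // half-open range [from, to)

    public SumTask(int[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;                       // small enough: do it directly
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;              // otherwise split in two
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                            // schedule left half asynchronously
        return right.compute() + left.join();   // compute right, wait for left
    }

    public static void main(String[] args) {
        int[] numbers = new int[100];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i + 1;

        long total = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println(total);   // 5050
    }
}
```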

Parallelism support

The core Java 7 fork/join addition is a new ForkJoinPool executor that is dedicated to running instances implementing ForkJoinTask.
ForkJoinTask objects support the creation of subtasks plus waiting for the subtasks to complete. With those clear semantics, the executor is able
to dispatch tasks among its internal thread pool by "stealing" jobs when a task is waiting for another task to complete and there are pending tasks to be run.

ForkJoinTask objects feature two specific methods:

The fork() method allows a ForkJoinTask to be scheduled for asynchronous execution. This allows a new
ForkJoinTask to be launched from an existing one.

In turn, the join() method allows a ForkJoinTask to wait for the completion of another one.

Cooperation among tasks happens through fork() and join(), as illustrated in the figure below. Note that the
fork() and join() method names should not be confused with their POSIX counterparts, with which a process can
duplicate itself. Here, fork() only schedules a new task within a ForkJoinPool; no child Java Virtual
Machine is ever created.

Figure 4.1. Cooperation Among Fork and Join Tasks

There are two types of ForkJoinTask specializations:

Instances of RecursiveAction represent executions that do not yield a return value.

In contrast, instances of RecursiveTask yield return values.

In general, RecursiveTask is preferred because most divide-and-conquer algorithms return a value from a computation over
a data set. For the execution of tasks, different synchronous and asynchronous options are provided, making it possible to implement
elaborate patterns.
