A singleton class should be designed to ensure that only one instance exists per application. Special care must be taken if your application is deployed in a clustered environment, as in that case multiple instances of your singleton class may exist across the cluster.

Here are a few ways to create thread-safe singleton classes in your application.

Here the instance is created on demand, and the approach is thread safe. The use of an inner class helps because the first time the singleton is requested, the JVM loads the inner class, and loading the inner class causes its static member variable to be created exactly once. Every subsequent request simply returns that same static reference.
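The idiom described above, often called the initialization-on-demand holder, can be sketched as follows (class names are illustrative):

```java
// Initialization-on-demand holder idiom: the nested Holder class is not
// loaded until getInstance() is first called, and the JVM's class-loading
// guarantees make the initialization thread-safe without synchronization.
class Singleton {
    private Singleton() { }              // prevent outside instantiation

    private static class Holder {
        // created once, when Holder is first loaded
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;          // triggers Holder loading on first call
    }
}
```

The JVM guarantees a class is initialized at most once and in a thread-safe way, so no explicit locking is needed.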

I thought it would be useful to have quick access to the entire Java source code, including the source of open source projects you depend on but don't have locally, while developing Java-based applications. I came across this awesome link while searching for Java source code online, and now I visit it almost every day. You can browse fully cross-referenced Java source code from the Maven repository just as you do in your IDE. I found it very useful and recommend it to every Java developer.

Don’t try to handle coding errors.

Unless your software is required to take extraordinary measures in error scenarios, don't spend a lot of time designing it to detect and recover from programming errors. In the case of an out-of-bounds array index, a divide-by-zero error, or any other programming error, the best strategy is to fail fast (and leave an audit trail of the problem that can be used to troubleshoot it).

Avoid declaring lots of exception classes.

Create a new exception class only when you expect client code to take a significantly different action based on the exception type. In my experience this is rarely the case, and the exception classes available in the Java API usually serve the purpose.

Don't let implementation details leak out of a method invocation as exceptions. Otherwise, your users might think your software is broken. When low-level exceptions percolate up to a high-level handler, there's little context to assist the handler in making informed decisions or reporting conditions that are traceable to any obvious cause. Recasting an exception whenever you cross an abstraction boundary enables exception handlers higher up in the call chain to make more informed decisions. If you want to include a problem trace when recasting, create a chained exception: it provides added context and holds a reference to the original lower-level exception. Exceptions can be chained repeatedly.

Provide context along with an exception.

What's most important in exception handling is information that helps create an informed response. Exception classes hold information. You can design them to be packed with information in addition to the bare-bones stack trace information provided by default. You might include values of parameters that raised the exception, specific error text, or detailed information that could be useful to plan a recovery. When an exception occurs, it is important that all pertinent data be passed to the exception's constructor. Such data is often critical for understanding and solving the problem, and can greatly reduce the time needed to find a solution.

The this reference is sometimes useful for this purpose, since its toString method is implicitly called when it is concatenated into a message. In addition, if you define exception classes yourself, you can design their constructors to force the caller to pass the pertinent data.
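As a sketch (the class and fields here are hypothetical), an exception whose constructor forces callers to supply the pertinent data might look like this:

```java
// A context-rich exception: the only constructor requires the data
// needed to diagnose the failure, and chains the underlying cause.
class TransferFailedException extends Exception {
    private final String accountId;
    private final long amountCents;

    TransferFailedException(String accountId, long amountCents, Throwable cause) {
        super("Transfer of " + amountCents + " cents from account "
                + accountId + " failed", cause);
        this.accountId = accountId;
        this.amountCents = amountCents;
    }

    String getAccountId() { return accountId; }
    long getAmountCents() { return amountCents; }
}
```

A handler higher up can now report or recover using the account id and amount, not just a stack trace.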

Uninformative stack traces are very
frustrating for the maintainer, and often inspire even the most
well-tempered programmers to temporarily violate local community
standards for obscenity.

Handle exceptions as close to the problem as you can.

As a first line of defense, consider the initial requestor. If the caller knows enough to perform a corrective action, you can rectify the condition on the spot. An exception propagated far from its source becomes difficult to trace, and objects further from the problem usually can't make meaningful decisions.

Use exceptions only to signal emergencies.

Exceptions shouldn’t be raised to indicate normal branching conditions that will alter the flow in the calling code. For example, a find operation may return zero, one, or many objects, so I wouldn’t raise an exception in this case. Instead, I’d design my find() method to return a null object or an empty collection. A dropped database connection, on the other hand, is a real emergency. There’s nothing that can be done to continue as planned.

Don’t repeatedly re-throw the same exception.

Although exceptions don’t cost anything until they’re raised, programs that frequently raise exceptions run more slowly.

Avoid empty catch blocks

It is usually a very bad idea to have an empty catch block because when the exception occurs, nothing happens, and the program fails for unknown reasons.

In general, when an exception occurs, it can be thrown up to the caller, or it can be caught in a catch block. When catching an exception, some options include:

Inform the user (strongly recommended)

Log the problem, using application-specific loggers, the JDK logging services, or a similar tool

Send an email describing the problem to an administrator

Deciding exactly what to do depends on the nature of the problem. If there is an actual bug in the program, a defect that needs to be fixed, then one might do all three of the above. In this case, the end user should likely be shown a generic "Sorry, we goofed" message, not a stack trace: it is bad form to display a stack trace to a non-technical end user, and exposing a stack trace may be a security risk.

If the exception does not represent a bug, then different behavior may be appropriate. For example, if a problem with user input is detected and an exception is thrown as a result, then merely informing the user of the problem might be all that is required.

Exception translation

Occasionally, it is appropriate to translate one type of exception into another.

The data layer, for example, can profit from this technique. The data layer seeks to hide almost all of its implementation details from the rest of the program. It even seeks to hide the basic persistence mechanism: whether a database or an ad hoc file scheme is used, for example.

However, every persistence style has specific exceptions - SQLException for databases, and IOException for files, for example. If the rest of the program is to remain truly ignorant of the persistence mechanism, then these exceptions cannot be allowed to propagate outside the data layer, and must be translated into some higher level abstraction - DataAccessException, say.
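A minimal sketch of this technique, with a hypothetical DataAccessException and DAO (the SQLException here is simulated rather than coming from a real database):

```java
// The data layer translates low-level persistence exceptions into a
// higher-level DataAccessException, chaining the original exception so
// the problem trace is preserved.
class DataAccessException extends RuntimeException {
    DataAccessException(String message, Throwable cause) {
        super(message, cause);
    }
}

class CustomerDao {
    String findName(int id) {
        try {
            return queryDatabase(id);                   // may fail internally
        } catch (java.sql.SQLException e) {
            // recast at the abstraction boundary; SQL details stay hidden
            throw new DataAccessException("Could not load customer " + id, e);
        }
    }

    private String queryDatabase(int id) throws java.sql.SQLException {
        throw new java.sql.SQLException("connection refused"); // simulated failure
    }
}
```

Callers see only DataAccessException, while the chained SQLException preserves the original cause for troubleshooting.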

Use template for repeated try-catch

Java's try-catch blocks are particularly common when using APIs which give an important role to checked exceptions (such as SQLException in JDBC). When using such an API, many published examples simply repeat the same try-catch code structure whenever necessary. However, it is simple to eliminate such code repetition using the template method pattern. The idea is to define the structure of the try-catch block in one place, in an abstract base class (ABC). Such an ABC is then extended whenever that particular try-catch block is needed. The concrete implementation of such an ABC will often have simple, "straight line" code.
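A minimal sketch of such an ABC (names are illustrative; a real version would typically hold a JDBC connection and catch SQLException):

```java
// Template method pattern for a repeated try-catch structure: the
// try-catch is written once in the abstract base class, and concrete
// subclasses supply only the "straight line" work.
abstract class DbTemplate<T> {
    final T execute() {
        try {
            return doInTransaction();
        } catch (Exception e) {
            // one place to translate/log the checked exception
            throw new RuntimeException("operation failed", e);
        }
    }

    // subclasses implement only the variable part
    protected abstract T doInTransaction() throws Exception;
}
```

Each call site then extends the template, often as an anonymous class, instead of repeating the try-catch boilerplate.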

Understanding compareTo

The compareTo() method is the sole member of the Comparable interface. It provides a means of fully ordering objects. For a concrete Comparable implementation to work well, the compareTo() implementation needs to satisfy certain conditions.

Antisymmetry: x.compareTo(y) has the opposite sign of y.compareTo(x)

Transitivity: if x.compareTo(y) > 0 and y.compareTo(z) > 0, then x.compareTo(z) > 0 (and the same for less than)

Substitutability: if x.compareTo(y) == 0, then x.compareTo(z) has the same sign as y.compareTo(z)

Consistency with equals: highly recommended, but not strictly required: x.compareTo(y) == 0 if and only if x.equals(y). Consistency with equals ensures that sorted collections (such as TreeSet) are well-behaved.

Things to remember while implementing compareTo()

Compare the various types of fields as follows :

Numeric primitives: use < and >. There is an exception to this rule: float and double primitives should be compared using Float.compare(float, float) and Double.compare(double, double). This avoids problems associated with special values such as NaN and -0.0.

Collections and arrays: Comparable does not seem to be intended for these kinds of fields. For example, List, Map and Set do not implement Comparable. Also, some collections have no definite order of iteration, so an element-by-element comparison is not meaningful in those cases.
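Putting these rules together, a sketch of a Comparable implementation (the Employee class here is hypothetical) might look like this:

```java
// Compare the most significant field first, fall back to the next one
// only on a tie, and use Double.compare for floating-point fields.
class Employee implements Comparable<Employee> {
    final String name;
    final double salary;

    Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    @Override
    public int compareTo(Employee other) {
        int byName = name.compareTo(other.name);     // String is Comparable
        if (byName != 0) return byName;
        return Double.compare(salary, other.salary); // safe for NaN and -0.0
    }
}
```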

Comparable implementations in JDK

All primitive wrapper classes like Integer, Long, Float, Double, Boolean and many more implement Comparable.

Hot tips

One can greatly increase the performance of compareTo by comparing first on items which are most likely to differ.

Avoid instanceof in methods that override or implement Object.equals(), Comparable.compareTo()

If the task is to perform a sort of items which are stored in a relational database, then it is usually much preferred to let the database perform the sort using the ORDER BY clause, rather than in code.

An alternative to implementing Comparable is passing Comparator objects as parameters. Be aware that if a Comparator compares only one of several significant fields, then the Comparator is very likely not consistent with equals.

When a class extends a concrete Comparable class and adds a significant field, a correct implementation of compareTo cannot be constructed. The only alternative is to use composition instead of inheritance. (A similar situation holds true for equals. See Effective Java for more information.)

Choosing the right Collection

Here is a quick guide for selecting the proper implementation of a Set, List, or Map in your application.

The best general purpose or 'primary' implementations are likely ArrayList, LinkedHashMap, and LinkedHashSet. Their overall performance is better, and you should use them unless you need a special feature provided by another implementation. That special feature is usually ordering or sorting.

Here, "ordering" refers to the order of items returned by an Iterator, and "sorting" refers to sorting items according to Comparable or Comparator.
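As a small illustration of the distinction, using the standard java.util classes:

```java
import java.util.*;

// LinkedHashSet preserves insertion order ("ordering"); TreeSet keeps
// its elements sorted by their Comparable order ("sorting").
class CollectionOrderDemo {
    static List<String> insertionOrder() {
        return new ArrayList<>(new LinkedHashSet<>(Arrays.asList("banana", "apple", "cherry")));
    }

    static List<String> sortedOrder() {
        return new ArrayList<>(new TreeSet<>(Arrays.asList("banana", "apple", "cherry")));
    }
}
```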

About PMD

How do you ensure that your code follows standard programming principles? For most Java development projects these days, the answer is to use PMD. It aims to become a de facto tool for analyzing source code and is used by more and more Java applications every day.

Note: There are many tools that PMD competes with, e.g. Checkstyle, FindBugs, Hammurapi, Soot, and Squale. However, exploring the capabilities of these other tools is out of the scope of this article.

PMD is a static rule set based Java source code analyzer that identifies potential problems in the code like:

Possible bugs - empty try/catch/finally/switch statements

Dead code - unused local variables, parameters and private methods

Suboptimal code - wasteful String/StringBuffer usage

Overcomplicated expressions - unnecessary if statements, for loops that could be while loops

A warning appears telling you that the feature is not signed. Ignore it and click Install to continue.

Accept the prompt to restart the workbench so that PMD is loaded.

When Eclipse restarts, a PMD welcome page is displayed: the plugin is correctly installed.

Writing custom rules (suggested by David Karr)

It is interesting and important to note that you can write your own custom rules. Writing PMD rules is cool because you don't have to wait for the PMD team to get around to implementing feature requests. There are two approaches to writing custom rules: implementing a rule as a Java class, or expressing it as an XPath query over PMD's abstract syntax tree.

Following are some tips that will help you avoid potential issues and be a little more productive while working with Eclipse.

Avoid installation problems

Never install a new version of Eclipse on top of an older version. Rename the old one first to move it out of the way, and let the new version be unpacked in a clean directory.

Recovering your messed up workspace

A corrupted workspace is a common occurrence and a troublemaker for many developers. If your Eclipse installation has startup errors or a corrupted configuration, it might be time for a fresh start. Start Eclipse with the -clean option, and all cached framework and runtime data will be cleared out. This often fixes plug-in issues and improves general stability.

Increase the memory allocation

With new plugins being added to the core Eclipse functionality, and the need for additional third-party plugins, the memory requirements of your Eclipse workspace increase. The default memory allocation configured in Eclipse is not enough for most Java EE development projects, which causes sluggish response from Eclipse. If you get Out of Memory errors or a sluggish response, you may have to increase the defaults set in the eclipse.ini file in the Eclipse installation directory. In particular, if you get an error about "PermGen" memory (permanent generation), add this line at the end and restart Eclipse:

-XX:MaxPermSize=256m

Use the lowest memory settings that work and perform well for your mix of projects.

Side by side editing

By dragging editors, you can show two files side by side. You can also edit two portions of the same file by using the Window > New Editor command.

Automatic code improvements

Set up Eclipse to automatically format source code and organize imports on every save. Select Window > Preferences > Java > Editor > Save Actions to enable these actions. This dialog also lets you configure actions like removing unnecessary casts or adding missing annotations. Configuring your Eclipse with optimized Compiler, Formatter and Checkstyle settings is described in detail in the post Using Eclipse Effectively.

Keyboard shortcuts

It is productive and convenient to use keyboard shortcuts for performing certain tasks in Eclipse rather than hunting through the navigation menus. For example, looking up references to a variable, method or class can be quickly achieved via the shortcut Ctrl+Shift+G. For your reference, I have included a list of the most important keyboard shortcuts in my post Eclipse Keyboard Shortcuts.

Package by feature, not layer

The first question in building an application is "How do I divide it up into packages?". For typical business applications, there seem to be two ways of answering this question: package-by-layer and package-by-feature.

Higher Modularity: Only package-by-feature yields packages with high cohesion, high modularity, and low coupling between packages.

Easier Code Navigation: Maintenance programmers need to do a lot less searching for items, since all items needed for a given task are usually in the same directory. Some tools that encourage package-by-layer use package naming conventions to ease the problem of tedious code navigation. However, package-by-feature transcends the need for such conventions in the first place, by greatly reducing the need to navigate between directories.

Higher Level of Abstraction: Staying at a high level of abstraction is one of programming's guiding principles of lasting value. It makes it easier to think about a problem, and emphasizes fundamental services over implementation details. As a direct benefit of being at a high level of abstraction, the application becomes more self-documenting : the overall size of the application is communicated by the number of packages, and the basic features are communicated by the package names. The fundamental flaw with package-by-layer style, on the other hand, is that it puts implementation details ahead of high level abstractions - which is backwards.

Separates Both Features and Layers: The package-by-feature style still honors the idea of separating layers, but that separation is implemented using separate classes. The package-by-layer style, on the other hand, implements that separation using both separate classes and separate packages, which does not seem necessary or desirable.

Minimizes Scope: Minimizing scope is another guiding principle of lasting value. Here, package-by-feature allows some classes to decrease their scope from public to package-private. This is a significant change, and will help to minimize ripple effects. The package-by-layer style, on the other hand, effectively abandons package-private scope, and forces you to implement nearly all items as public. This is a fundamental flaw, since it doesn't allow you to minimize ripple effects by keeping secrets.

Better Growth Style: In the package-by-feature style, the number of classes within each package remains limited to the items related to a specific feature. If a package becomes too large, it may be refactored in a natural way into two or more packages. The package-by-layer style, on the other hand, is monolithic. As an application grows in size, the number of packages remains roughly the same, while the number of classes in each package will increase without bound.
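A hypothetical sketch of the two layouts, and of the package-private scope that package-by-feature makes possible (all package and class names are illustrative):

```java
// Package-by-feature:
//   com.app.ordering   - OrderController (public), OrderDao (package-private)
//   com.app.invoicing  - InvoiceController (public), InvoiceDao (package-private)
//
// Package-by-layer:
//   com.app.controllers - OrderController, InvoiceController (all public)
//   com.app.dao         - OrderDao, InvoiceDao (forced to be public)
//
// Within a feature package, collaborators can stay package-private:
class OrderDao {                    // no "public": invisible outside the feature
    int save(String order) {        // illustrative stand-in for persistence
        return order.length();
    }
}
```

Because OrderDao has default (package-private) visibility, only classes in its own feature package can touch it, which keeps ripple effects contained.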

I thought it would be useful for architects, designers and developers to understand how Java has evolved since its inception, so that they are aware of what capabilities they have access to when working with a particular version of Java. Besides, it is always good to have knowledge about the evolution of the technologies you are working with.

It has been quite some time since Java 7 was released, with plenty of new features and enhancements that should interest the Java developer community. The following sections cover some of these changes with examples.

Strings in switch statements

In the JDK 7 release, you can use a String object in the expression of a switch statement:

public String getTypeOfDayWithSwitchStatement(String dayOfWeekArg) {
    String typeOfDay;
    switch (dayOfWeekArg) {
        case "Monday":
            typeOfDay = "Start of work week";
            break;
        case "Tuesday":
        case "Wednesday":
        case "Thursday":
            typeOfDay = "Midweek";
            break;
        case "Friday":
            typeOfDay = "End of work week";
            break;
        case "Saturday":
        case "Sunday":
            typeOfDay = "Weekend";
            break;
        default:
            throw new IllegalArgumentException("Invalid day of the week: " + dayOfWeekArg);
    }
    return typeOfDay;
}

The diamond operator "<>"

You can replace the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context. This pair of angle brackets is informally called the diamond.

In Java SE 7, you can substitute the parameterized type of the constructor with an empty set of type parameters (<>):

Map<String, List<String>> myMap = new HashMap<>();

Java SE 7 supports limited type inference for generic instance creation; you can use the diamond only if the parameterized type of the constructor is obvious from the context.

Catching multiple exception types

In Java SE 7, a single catch block can handle more than one type of exception. The catch clause specifies the types of exceptions that the block can handle, and each exception type is separated with a vertical bar (|). Some other advantages apart from the syntactic improvement:

Bytecode generated by compiling a catch block that handles multiple exception types will be smaller (and thus superior) than compiling many catch blocks that handle only one exception type each.

A catch block that handles multiple exception types creates no duplication in the bytecode generated by the compiler; the bytecode has no replication of exception handlers.

Note: If a catch block handles more than one exception type, then the catch parameter is implicitly final, so you cannot assign any other value to it within the catch block.
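A sketch of a multi-catch block (the exceptions triggered here are contrived for illustration):

```java
// One catch block handles both NumberFormatException and IOException;
// the parameter ex is implicitly final and cannot be reassigned.
class MultiCatchDemo {
    static String classify(boolean numeric) {
        try {
            if (numeric) {
                Integer.parseInt("not a number");     // NumberFormatException
            } else {
                throw new java.io.IOException("boom");
            }
        } catch (NumberFormatException | java.io.IOException ex) {
            return ex.getClass().getSimpleName();
        }
        return "none";   // reachable as far as the compiler can tell
    }
}
```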

Re-throwing exceptions with more inclusive type checking

The Java SE 7 compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration.

Consider the following example:

static class FirstException extends Exception { }
static class SecondException extends Exception { }

public void rethrowException(String exceptionName) throws Exception {
    try {
        if (exceptionName.equals("First")) {
            throw new FirstException();
        } else {
            throw new SecondException();
        }
    } catch (Exception e) {
        throw e;
    }
}

This example's try block could throw either FirstException or SecondException. Suppose you want to specify these exception types in the throws clause of the rethrowException method declaration. In releases prior to Java SE 7, you cannot do so: because the exception parameter of the catch clause, e, is of type Exception, and the catch block rethrows e, you can only specify the exception type Exception in the throws clause of the rethrowException method declaration.

However, in Java SE 7, you can specify the exception types FirstException and SecondException in the throws clause in the rethrowException method declaration. The Java SE 7 compiler can determine that the exception thrown by the statement throw e must have come from the try block, and the only exceptions thrown by the try block can be FirstException and SecondException. Even though the exception parameter of the catch clause, e, is type Exception, the compiler can determine that it is an instance of either FirstException or SecondException:

This analysis is disabled if the catch parameter is assigned another value in the catch block; in that case, you must specify the exception type Exception in the throws clause of the method declaration.

In detail, in Java SE 7 and later, when you declare one or more exception types in a catch clause, and rethrow the exception handled by this catch block, the compiler verifies that the type of the rethrown exception meets the following conditions:

The try block is able to throw it.

There are no other preceding catch blocks that can handle it.

It is a subtype or supertype of one of the catch clause's exception parameters.

The Java SE 7 compiler allows you to specify the exception types FirstException and SecondException in the throws clause in the rethrowException method declaration because you can rethrow an exception that is a supertype of any of the types declared in the throws.

In releases prior to Java SE 7, you cannot throw an exception that is a supertype of one of the catch clause's exception parameters. A compiler from a release prior to Java SE 7 generates the error "unreported exception Exception; must be caught or declared to be thrown" at the statement throw e. The compiler checks whether the type of the exception thrown is assignable to any of the types declared in the throws clause of the rethrowException method declaration. However, the type of the catch parameter e is Exception, which is a supertype, not a subtype, of FirstException and SecondException.

The try-with-resources statement

The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement. Any object that implements java.lang.AutoCloseable, which includes all objects that implement java.io.Closeable, can be used as a resource.

The following example reads the first line from a file. It uses an instance of BufferedReader to read data from the file. BufferedReader is a resource that must be closed after the program is finished with it:

// Java 7 code
static String readFirstLineFromFile(String path) throws IOException {
    try (BufferedReader br = new BufferedReader(new FileReader(path))) {
        return br.readLine();
    } // no finally block required; the resource is closed automatically
}

In this example, the resource declared in the try-with-resources statement is a BufferedReader. The declaration statement appears within parentheses immediately after the try keyword. The class BufferedReader, in Java SE 7 and later, implements the interface java.lang.AutoCloseable. Because the BufferedReader instance is declared in a try-with-resource statement, it will be closed regardless of whether the try statement completes normally or abruptly (as a result of the method BufferedReader.readLine throwing an IOException).

Prior to Java SE 7, you would use a finally block to ensure that a resource is closed regardless of whether the try statement completes normally or abruptly.
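A sketch of that pre-Java 7 equivalent, closing the reader in a finally block:

```java
import java.io.*;

// Pre-Java 7 style: the resource is closed in a finally block so it is
// released whether or not readLine throws.
class ReadFirstLine {
    static String readFirstLineFromFile(String path) throws IOException {
        BufferedReader br = new BufferedReader(new FileReader(path));
        try {
            return br.readLine();
        } finally {
            br.close();   // runs on both normal and abrupt completion
        }
    }
}
```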

You may declare one or more resources in a try-with-resources statement.

Note: A try-with-resources statement can still have catch and finally blocks just like an ordinary try statement. In a try-with-resources statement, any catch or finally block is run after the resources declared have been closed.

Numeric literals with underscores

In Java SE 7 and later, any number of underscore characters (_) can appear anywhere between digits in a numerical literal. This feature enables you, for example, to separate groups of digits in numeric literals, which can improve the readability of your code.

You can place underscores only between digits; you cannot place underscores in the following places:

At the beginning or end of a number

Adjacent to a decimal point in a floating point literal

Prior to an F or L suffix

In positions where a string of digits is expected

The following examples demonstrate valid and invalid underscore placements in numeric literals:

int x6 = 0x_52;  // Invalid; cannot put underscores at the beginning of a number
int x7 = 0x5_2;  // OK (hexadecimal literal)
int x8 = 0x52_;  // Invalid; cannot put underscores at the end of a number
int x9 = 0_52;   // OK (octal literal)
int x10 = 05_2;  // OK (octal literal)
int x11 = 052_;  // Invalid; cannot put underscores at the end of a number

Binary literals

In Java SE 7, the integral types (byte, short, int, and long) can also be expressed using the binary number system. To specify a binary literal, add the prefix 0b or 0B to the number. The following examples show binary literals:

// An 8-bit 'byte' value:
byte aByte = (byte)0b00100001;

// A 16-bit 'short' value:
short aShort = (short)0b1010000101000101;

// Some 32-bit 'int' values:
int anInt1 = 0b10100001010001011010000101000101;
int anInt2 = 0b101;
int anInt3 = 0B101; // The B can be upper or lower case.

// A 64-bit 'long' value. Note the "L" suffix:
long aLong = 0b1010000101000101101000010100010110100001010001011010000101000101L;

Fork and Join

The effective use of parallel cores in a Java program has always been a challenge. There were a few home-grown frameworks that would distribute work across multiple cores and then join the results. Java 7 incorporates this capability as the Fork/Join framework.

Basically, Fork/Join breaks the task at hand into mini-tasks until a mini-task is simple enough to be solved without further breakup, much like a divide-and-conquer algorithm. One important concept in this framework is that ideally no worker thread is idle: it implements a work-stealing algorithm in which idle workers "steal" work from workers that are busy.

The core classes supporting the Fork-Join mechanism are ForkJoinPool and ForkJoinTask. The ForkJoinPool is basically a specialized implementation of ExecutorService implementing the work-stealing algorithm.
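A minimal sketch of a RecursiveTask that sums an array by splitting it in half until the chunks are small (the threshold here is arbitrary):

```java
import java.util.concurrent.*;

// Divide-and-conquer sum: fork the left half, compute the right half in
// the current thread, then join the forked result.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 8;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {            // small enough: solve directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                             // schedule left half asynchronously
        return right.compute() + left.join();    // compute right, then join left
    }
}
```

A ForkJoinPool's invoke method runs the task and returns its result.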

Supporting dynamism

Java is a statically typed language: the type checking of variables, methods and return values is performed at compile time. The JVM executes this strongly typed bytecode at runtime without having to work out the type information itself.

There's another breed of typed languages, the dynamically typed languages. Ruby, Python and Clojure are in this category. In these languages, type information remains unresolved until runtime, which the JVM's statically typed bytecode did not originally support well.

There has been increasing pressure on the Java platform to run dynamic languages efficiently. Although it is possible to run these languages on a JVM (using reflection), it comes with constraints and restrictions.

In Java 7, a new feature called invokedynamic was introduced, along with VM changes to accommodate non-Java language requirements. A new package, java.lang.invoke, consisting of classes such as MethodHandle and CallSite, has been created to extend the support for dynamic languages.

Use the final keyword liberally to communicate your intent. The final keyword has more than one meaning:

a final class cannot be extended

a final method cannot be overridden

final fields, parameters, and local variables cannot change their value once set

In the last case, "value" for primitives is understood in the usual sense, while "value" for objects means the object's identity, not its state. Once the identity of a final object reference is set, it can still change its state, but not its identity. Declaring primitive fields as final automatically ensures thread-safety for that field.

Some habitually declare parameters as final, since this almost always is the desired behaviour. Others find this verbose, and of little real benefit.

Consistently using final with local variables (when appropriate) can be useful as well. It brings attention to the non-final local variables, which usually have more logic associated with them (for example, result variables, accumulators, loop variables). Many find this verbose. A reasonable approach is to use final for local variables only if there is at least one non-final local variable in the method; this serves to quickly distinguish the non-final local variables from the others.

Using final :

clearly communicates your intent

allows the compiler and virtual machine to perform minor optimizations

clearly flags items which are simpler in behavior - final says, "If you are looking for complexity, you won't find it here."
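A small sketch of the identity-versus-state point above: a final reference cannot be reassigned, but the object it refers to can still change state.

```java
// A final reference fixes identity, not state.
class FinalDemo {
    static int demo() {
        final StringBuilder sb = new StringBuilder("a");  // identity is fixed
        sb.append("b");                  // state may still change
        // sb = new StringBuilder();     // would not compile: sb is final
        return sb.length();
    }
}
```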

Null pointer exceptions (NPEs) are undoubtedly among the most common and most annoying errors, and in most cases they can be avoided simply by sticking to some good coding practices. Consider a method that tests a status value by calling status.equals(StatusEnum.EXPIRED): if the "status" passed to the method is null, you get an NPE at that statement. Written as follows, with the constant as the receiver of equals, the NPE is avoided.

// GOOD CODE
private Boolean isExpired(final StatusEnum status) {
    if (StatusEnum.EXPIRED.equals(status)) { // constant.equals(variable) is null-safe
        return Boolean.TRUE;
    } else {
        return Boolean.FALSE;
    }
}

Some editors, like IntelliJ IDEA, provide a quick fix for similar cases: if you have object.equals("string literal"), it can be replaced with "string literal".equals(object), and you can apply the replacement across your entire code base in one go if you wish.

In the book "Effective Java", Joshua Bloch makes this compelling recommendation :

"Classes should be immutable unless there's a very good reason to make them mutable....If a class cannot be made immutable, limit its mutability as much as possible."

Immutable objects are objects whose data or properties cannot be changed after they are constructed. The JDK has a number of immutable classes, like String and Integer. Immutable objects have a long list of compelling positive qualities, and they can greatly simplify your program. Immutable objects :

are simple to construct, test, and use

are automatically thread-safe and have no synchronization issues

do not need a copy constructor

do not need an implementation of clone

allow hashCode to use lazy initialization, and to cache its return value

do not need to be copied defensively when used as a field

make good Map keys and set elements (these objects must not change state while in the collection)

have their class invariant established once upon construction, and it never needs to be checked again

always have "failure atomicity" (a term used by Joshua Bloch in his book "Effective Java") : if an immutable object throws an exception, it's never left in an undesirable or indeterminate state

They are among the simplest and most robust kinds of classes you can possibly build in Java. When you create immutable classes, entire categories of problems simply disappear.

Make a class immutable by following these guidelines :

ensure the class cannot be extended - make the class final, or use static factories and keep constructors private

make fields private and final

force callers to construct an object completely in a single step, instead of using a no-argument constructor combined with subsequent calls to setXXX methods (that is, avoid the Java Beans convention)

do not provide any methods which can change the state of the object in any way - not just setXXX methods, but any method which can change state

if the class has any mutable object fields, then they must be defensively copied when passed between the class and its caller

Here is an example. Please go through the comments that are there in the code to understand the concept.

package com.javagyan.examples;

import java.util.Date;

/**
 * Planet is an immutable class, since there is no way to change its state after construction.
 */
public final class Planet {

    public Planet(double aMass, String aName, Date aDateOfDiscovery) {
        fMass = aMass;
        fName = aName;
        // Make a private copy of aDateOfDiscovery. This is the only way to
        // keep the fDateOfDiscovery field private, and it shields this class
        // from any changes that the caller may make to the original
        // aDateOfDiscovery object.
        fDateOfDiscovery = new Date(aDateOfDiscovery.getTime());
    }

    /**
     * Returns a primitive value.
     *
     * The caller can do whatever they want with the return value, without
     * affecting the internals of this class. Why? Because this is a primitive
     * value. The caller sees its "own" double that simply has the same value
     * as fMass.
     * @return double
     */
    public double getMass() {
        return fMass;
    }

    /**
     * Returns an immutable object.
     *
     * The caller gets a direct reference to the internal field. But this is
     * not dangerous, since String is immutable and cannot be changed.
     * @return String
     */
    public String getName() {
        return fName;
    }

    // /**
    //  * Returns a mutable object - likely bad style.
    //  *
    //  * The caller gets a direct reference to the internal field. This is
    //  * usually dangerous, since the Date object state can be changed both by
    //  * this class and its caller. That is, this class is no longer in
    //  * complete control of fDateOfDiscovery.
    //  */
    // public Date getDateOfDiscovery() {
    //     return fDateOfDiscovery;
    // }

    /**
     * Returns a mutable object - good style.
     *
     * Returns a defensive copy of the field. The caller of this method can do
     * anything they want with the returned Date object, without affecting the
     * internals of this class in any way. Why? Because they do not have a
     * reference to fDateOfDiscovery. Rather, they are playing with a second
     * Date that initially has the same data as fDateOfDiscovery.
     * @return Date
     */
    public Date getDateOfDiscovery() {
        return new Date(fDateOfDiscovery.getTime());
    }

    // PRIVATE //

    /** Final primitive data is always immutable. */
    private final double fMass;

    /** An immutable object field. (String objects never change state.) */
    private final String fName;

    /**
     * A mutable object field. In this case, the state of this mutable field
     * is to be changed only by this class. (In other cases, it makes perfect
     * sense to allow the state of a field to be changed outside the native
     * class; this is the case when a field acts as a "pointer" to an object
     * created elsewhere.)
     */
    private final Date fDateOfDiscovery;
}

Change Default Author Name for JavaDocs in Eclipse

The auto-generated JavaDoc at the class level picks the user name from the system user. This can result in weird author names in your code files, since usernames in an organization usually follow organizational naming conventions. For example, the user name I log in with is something like SK0012345, and you will agree that it wouldn't look good as an author name in a Java file and might not make much sense to most other viewers of the code.

/**
 * Test default author in JavaDocs
 * @author SK0012345
 */
public class TestClass {
}

Here is a quick way to change the default author name in your Eclipse projects. Simply edit the eclipse.ini file found in the root directory where you placed Eclipse. I have Eclipse at C:\devtools\development\eclipse, so my path would be C:\devtools\development\eclipse\eclipse.ini. Open this file, add the following line, and save.

-Duser.name=Sanjeev Kumar

After saving, restart Eclipse. Now when you write a JavaDoc comment and use the author attribute by typing @author and pressing enter on the autocomplete, the configured name will appear instead of your system username.


When generics were introduced in JDK 1.5, raw types were retained only to maintain backwards compatibility with older versions of Java. Although using raw types is still possible, they should be avoided for the following reasons :

they usually require casts

they aren't type safe, and some important kinds of errors will only appear at runtime

they are less expressive, and don't self-document in the same way as parameterized types

Unless you are using a JDK version prior to 1.5, I don't see a reason why you should not use parameterized types in your code.
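The difference can be seen in a short sketch (the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class RawVsGeneric {

    // Raw type: the compiler cannot check what goes in, so the error
    // surfaces only at runtime, as a ClassCastException on the way out.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static String rawAccess() {
        List raw = new ArrayList();
        raw.add("hello");
        raw.add(42);                // nothing stops this mistake
        return (String) raw.get(1); // ClassCastException here
    }

    // Parameterized type: the same mistake simply does not compile,
    // and no cast is needed when reading elements back.
    static String typedAccess() {
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        // typed.add(42);           // compile-time error
        return typed.get(0);
    }

    public static void main(String[] args) {
        System.out.println(typedAccess()); // hello
        try {
            rawAccess();
        } catch (ClassCastException e) {
            System.out.println("raw type failed at runtime");
        }
    }
}
```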

In most cases switch will be lighter and perform faster than an if/else ladder. The compiler is able to optimize switch statements into a lookup table, and it performs compile-time checking of the case labels when dealing with enumerations, so it's usually preferable to use switch over if/else when you're dealing with numeric or enum types in Java.
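A minimal sketch of an enum-based switch (all names hypothetical):

```java
public class SwitchDemo {

    enum Status { ACTIVE, EXPIRED, SUSPENDED }

    // switch on an enum: the compiler checks that every case label is a
    // valid constant of Status, and can compile the dispatch into a table.
    static String describe(Status status) {
        switch (status) {
            case ACTIVE:    return "usable";
            case EXPIRED:   return "renew required";
            case SUSPENDED: return "contact support";
            default:        return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(Status.EXPIRED)); // renew required
    }
}
```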

Sometimes it may be required to display an array in string format. The expected result is that all the elements of the array get printed out, the way it is done for any collection. However, the default toString() method is not very informative and does not include any array content. I have seen developers doing weird stuff to achieve this, including writing helper methods and quite a few lines of code.

However, there are easier ways available within the Java API to achieve the same output. To provide more useful representations of arrays, various toString methods (and the deepToString method) were added to the Arrays class in JDK 1.5. Those methods can be used when available, as in :
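The example itself appears to have been lost; a minimal sketch of what Arrays.toString and Arrays.deepToString produce:

```java
import java.util.Arrays;

public class ArrayPrinting {
    public static void main(String[] args) {
        int[] numbers = {1, 2, 3};
        String[][] grid = {{"a", "b"}, {"c", "d"}};

        System.out.println(numbers);                   // e.g. [I@1b6d3586 - not useful
        System.out.println(Arrays.toString(numbers));  // [1, 2, 3]
        System.out.println(Arrays.deepToString(grid)); // [[a, b], [c, d]]
    }
}
```

Arrays.toString handles one-dimensional arrays; deepToString recurses into nested arrays, which plain toString would render as object references.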

Trust me on this one, I have learnt it the hard way. Let's start with an understanding of the terms composition and inheritance.

Composition

Functionality of an object is made up of an aggregate of different classes.

In practice, this means holding a pointer to another class to which work is deferred.

Collaboration is implemented simply by forwarding all calls to an object field

it has no dependence on implementation details of the object field

it is more flexible, since it is defined dynamically at run-time, not statically at compile-time
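The forwarding style described above can be sketched as follows (the Playlist class and its methods are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Composition: Playlist is built from a List rather than extending one.
// Collaboration is implemented by forwarding calls to the object field,
// so Playlist has no dependence on the List's implementation details.
public class Playlist {

    private final List<String> tracks = new ArrayList<>(); // the composed object

    public void add(String track) {
        tracks.add(track); // work is deferred to the field
    }

    public int size() {
        return tracks.size();
    }

    public static void main(String[] args) {
        Playlist p = new Playlist();
        p.add("Track 1");
        System.out.println(p.size()); // 1
    }
}
```

Because only add and size are exposed, Playlist's contract cannot be broken by changes to ArrayList's other methods, which is exactly the coupling problem inheritance would introduce.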

Inheritance

Functionality of an object is made up of its own functionality plus functionality from its parent classes.

Sub-classing or inheritance has the following issues :

it violates encapsulation, since the implementations of the superclass and subclass become tightly coupled

new methods added to the superclass can break the subclass

superclass and subclass need to evolve in parallel

designing a class so that it may be safely extended takes extra work - extending a class not explicitly designed for such use is risky

different packages are often under the control of different programmers - extending a class in a different package is risky

A common exception is the template method design pattern. There, the safest style is to make all items in the abstract base class final, except for the single abstract method which needs to be implemented by the subclass.

Allowing a method to be overridden should always be done intentionally, not by accident. Any method which is not private, static, or final can be overridden. Over-ridable methods, and any methods which call them, represent unusual places in your code, to which special attention must be paid. This is because sub-classing violates encapsulation, in the sense that it is possible for a subclass to break its superclass's contract. If you do not intend a method to be overridden, then you should declare it as private, static, or final.

When you do override a method, you should use the @Override annotation. This allows the compiler to verify that the method is indeed a valid override of an existing method. For example, your implementations of toString, equals, and hashCode should always use @Override in the method header.
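A small sketch (the Money class is hypothetical) showing @Override on all three methods:

```java
public class Money {

    private final long cents;

    public Money(long cents) {
        this.cents = cents;
    }

    @Override // the compiler verifies this really overrides Object.toString
    public String toString() {
        return (cents / 100) + "." + String.format("%02d", cents % 100);
    }

    @Override // a typo like equals(Money other) would be flagged as not overriding
    public boolean equals(Object other) {
        return other instanceof Money && ((Money) other).cents == cents;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(cents);
    }

    public static void main(String[] args) {
        System.out.println(new Money(1250)); // 12.50
    }
}
```

Without the annotation, accidentally declaring equals(Money) instead of equals(Object) would silently create an overload rather than an override; with it, the compiler rejects the mistake.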

Beware of floating point numbers

Outside of a scientific or engineering context, the use of float and double (and the corresponding wrapper classes Float and Double ) should likely be avoided. The fundamental problem is that rounding errors will always occur when using these data types - they are unavoidable.

In a typical business application, using float and double to represent money values is dangerous, because of these rounding issues. Instead, BigDecimal should usually be used to represent money. It is also a common practice to have utility classes like "MonetaryAmount.java" that are wrappers over "BigDecimal" class of the JDK and that provides helper methods and functionality as needed by the business application.

From an IBM article on this topic :

"...binary floating-point arithmetic should not be used for financial, commercial, and user-centric applications or web services because the decimal data used in these applications cannot be represented exactly using binary floating-point."

From an article by Brian Goetz :

"...it is a bad idea to use floating point to try to represent exact quantities like monetary amounts. Using floating point for dollars-and-cents calculations is a recipe for disaster. Floating point numbers are best reserved for values such as measurements, whose values are fundamentally inexact to begin with."
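A minimal sketch of the rounding problem and the BigDecimal alternative:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class MoneyMath {
    public static void main(String[] args) {
        // double: 0.1 + 0.2 is not exactly 0.3
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        // BigDecimal built from Strings represents the decimal values exactly.
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b)); // 0.30

        // Division forces you to state a scale and rounding mode explicitly.
        BigDecimal total = new BigDecimal("100.00");
        System.out.println(total.divide(new BigDecimal("3"), 2, RoundingMode.HALF_UP)); // 33.33
    }
}
```

Note the String constructor: new BigDecimal(0.1) would faithfully capture the binary approximation of 0.1, which is exactly what we are trying to avoid.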

To debug your application on JBOSS server you would need to enable the debugging on your JBOSS application server. By default it is turned off. In order to set jboss app server to be running in debugging mode, you should uncomment following line in "jboss-5.1.0.GA/jboss/bin/run.conf"
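In stock JBoss 5.1.0.GA distributions the JPDA entry in run.conf typically looks like the following (this is an assumption based on the standard distribution; verify against your own file):

```shell
# In run.conf, remove the leading "#" from the JPDA line so it reads:
JAVA_OPTS="$JAVA_OPTS -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
```

With this enabled, the server listens for a remote debugger (for example, Eclipse's Remote Java Application launch configuration) on port 8787.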

To ensure that the code you write is always clean and compliant with your project-specific coding standards and guidelines, it is important that you configure your Eclipse to effectively use its compiler settings, Formatter, CheckStyle and related built-in features. Most developers wouldn't bother to do so, but trust me, it is a huge time saver in the long run and will always keep your code quality in check.

The following spreadsheet has the configuration that we use in our current projects. You can customize these settings according to your development standards and needs.

Eclipse Compiler Settings


Also, ideally your Eclipse should be configured with your "code formatter" and "CheckStyle" XML configurations. To start with, you can simply import the XMLs attached to this page to enable code formatting and CheckStyle for your code. You can get more detail on CheckStyle at http://checkstyle.sourceforge.net/.

Links to configure the CheckStyle, Formatter and compiler setting can be found under your Eclipse-->Preferences menu.

Skip over certain classes when using Step Into (F5) in Eclipse’s debugger

Whenever I use the Step Into feature (F5) in Eclipse’s debugger, I’m
mainly interested in stepping through code in my own classes, not the
ones from external libraries or even Java classes.

For example, there’s almost no reason to ever want to step into
Spring’s code or proxy classes (other than to learn more about them or
maybe debug a potential bug in Spring). And normally I’m not interested
in Java util classes (eg. ArrayList). This also goes for Hibernate,
Apache Commons, Google and many other external libraries.

Fortunately, Eclipse makes it easy to specify which classes to skip
by allowing step filters. This makes it easier to focus on your own code
and also keeps your editor area clean since Eclipse won’t be opening
classes in separate editors all the time.

Enable Step Filters

To use step filters, the first step is to enable them. Eclipse comes with some default filters, so let's start by enabling them all. This will filter all Java classes (ie. java.*) and all Sun classes (ie. sun.*).

Go to Window > Preferences > Java > Debug > Step Filtering.

Select the option Use Step Filters.

Click Select All to enable all the filters.

Leave the other options below the filter list as-is.

Here’s an example of what it should look like:

NB: Even with step filters enabled, you can still
set breakpoints within any of these classes (if you have the source) and
Eclipse will still stop at the breakpoint. Step filtering only affects
the way that Step Into works.

Also note that Eclipse will still step into your classes if they’re called from the ignored classes. For example, when you call Collections.sort(List, Comparator) and pass your own Comparator
implementation, Eclipse will not step into the sort code, but it will
step into your Comparator when it’s called by the sort code.

If you want to change this behaviour (ie. prevent Eclipse from stopping in your method), then deselect Step through filters.
However, I’d recommend only doing this if you’ve tried out the default,
because most times you’ll probably want to step through your own code.

The next step is to create some step filters of your own.

Creating your own step filters

Once you’ve enabled step filters, all you have to do is add the
classes you want to filter. Let’s assume that we want to ignore all
classes from Spring, especially proxy classes.

Click Add Filter… A dialog should appear prompting you to enter a pattern.

Enter a regular expression for the classes you want to filter in the Pattern to filter field, then click Ok. In our example, enter org.springframework.* (see image below). It's easiest to specify the top-level package name with an asterisk at the end.

Add another filter with the pattern $Proxy* to skip over Spring proxy classes (eg. when using Spring Transactions).

Click Ok on the Preferences dialog when you’re done.

Here’s what the step filter pattern dialog should look like:

Now when you use the debugger, you won’t be taken into Spring classes when you use Step Into (F5).

Some ideas for custom filters

In addition to the Spring classes, you might also want to consider
adding the following common libraries to your step filters to make
debugging easier:

org.apache.*

org.hibernate.*

com.google.*

org.eclipse.*

org.osgi.*

The last two are especially useful if you’re doing Eclipse RCP and/or OSGi development.

Overloading can be tricky

Extra care must be taken while writing overloaded methods. The compiler decides which version of an overloaded method will be called based on the declared compile-time type, not the run-time type. When overloaded methods have the same number of arguments, the rules regarding this decision can sometimes be a bit tricky.

If there may be confusion, you may simplify the design:

use different method names, and avoid overloading altogether

retain overloading, but ensure each method has a distinct number of arguments

In addition, it is recommended that varargs not be used when a method is overloaded, since this makes it more difficult to determine which overload is being called.

Reminder : Overloading requires methods with distinct signatures. The signature of a method includes its name and the ordered list of its argument types. All other items appearing in a method header, such as exceptions, return type, final, and synchronized, do not contribute to a method's signature.
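The compile-time dispatch described above can be demonstrated with a short sketch (hypothetical names):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class OverloadDemo {

    static String classify(Collection<?> c) { return "collection"; }
    static String classify(List<?> l)       { return "list"; }

    public static void main(String[] args) {
        Collection<String> c = new ArrayList<>(); // runtime type is ArrayList, a List
        List<String> l = new ArrayList<>();

        // The declared compile-time type of the argument decides the overload:
        System.out.println(classify(c)); // collection
        System.out.println(classify(l)); // list
    }
}
```

Both variables refer to an ArrayList at runtime, yet different overloads run, because overload resolution happened at compile time against the declared types.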