3/31/2007

Prior to Java 5, isAlive() was commonly used to test a thread's state. If isAlive() returned false, the thread was either new or terminated, but there was simply no way to differentiate between the two. Starting with the release of Tiger (Java 5) you can now find out what state a thread is in by using the getState() method, which returns a value of the Thread.State enum. A thread can only be in one of the following states at a given point in time.

NEW: A fresh thread that has not yet started to execute.
RUNNABLE: A thread that is executing in the Java virtual machine.
BLOCKED: A thread that is blocked waiting for a monitor lock.
WAITING: A thread that is waiting to be notified by another thread.
TIMED_WAITING: A thread that is waiting to be notified by another thread for a specific amount of time.
TERMINATED: A thread whose run method has ended.
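A quick way to see several of these states in one program (the timings here are just long enough to observe the states reliably):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(200); // park long enough to be observed
                } catch (InterruptedException e) {
                    // ignore; the thread is about to finish anyway
                }
            }
        });

        System.out.println(worker.getState()); // NEW
        worker.start();
        Thread.sleep(50); // give the worker time to enter sleep()
        System.out.println(worker.getState()); // typically TIMED_WAITING (inside sleep)
        worker.join();
        System.out.println(worker.getState()); // TERMINATED
    }
}
```

Note that a thread in Thread.sleep() reports TIMED_WAITING, while a thread blocked in Object.wait() with no timeout reports WAITING.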

3/29/2007

JPC is a pure Java emulation of an x86 PC with fully virtual peripherals. It runs anywhere you have a JVM, whether x86, RISC, mobile phone, set-top box, possibly even your refrigerator! All this, with the bulletproof security and stability of Java technology.

JPC creates a virtual machine upon which you can install your favourite operating system in a safe, flexible and powerful way. It aims to give you complete control over your favorite PC software's execution environment, whatever your real hardware or operating system, and JPC's multilayered security makes it the safest solution for running the most dangerous software in quarantine - ideal for archiving viruses, hosting honeypots, and protecting your machine from malicious or unstable software.

JPC has been developed since August 2005 in Oxford University's Subdepartment of Particle Physics. It can be run on a number of devices, from PCs to mobile phones, and you can see some of the results of JPC in action (more soon!). Some might see JPC as part of a nefarious plot by mad scientists who want to harness every last CPU in the world for their research - but we prefer to see JPC as Java-hardened protection against their buggy programs.

3/28/2007

Faster Deep Copies of Java Objects

The java.lang.Object root superclass defines a clone() method that will, assuming the subclass implements the java.lang.Cloneable interface, return a copy of the object. While Java classes are free to override this method to do more complex kinds of cloning, the default behavior of clone() is to return a shallow copy of the object. This means that the values of all of the original object's fields are copied to the fields of the new object.

A property of shallow copies is that fields that refer to other objects will point to the same objects in both the original and the clone. For fields that contain primitive or immutable values (int, String, float, etc…), there is little chance of this causing problems. For mutable objects, however, cloning can lead to unexpected results. Figure 1 shows an example.

import java.util.Vector;

public class Example1 {

    public static void main(String[] args) {
        // Make a Vector
        Vector original = new Vector();

        // Make a StringBuffer and add it to the Vector
        StringBuffer text = new StringBuffer("The quick brown fox");
        original.addElement(text);

        // Clone the vector and print out the contents
        Vector clone = (Vector) original.clone();
        System.out.println("A. After cloning");
        printVectorContents(original, "original");
        printVectorContents(clone, "clone");
        System.out.println("--------------------------------------------------------");
        System.out.println();

        // Add another object (an Integer) to the clone and
        // print out the contents
        clone.addElement(new Integer(5));
        System.out.println("B. After adding an Integer to the clone");
        printVectorContents(original, "original");
        printVectorContents(clone, "clone");
        System.out.println("--------------------------------------------------------");
        System.out.println();

        // Change the StringBuffer contents
        text.append(" jumps over the lazy dog.");
        System.out.println("C. After modifying one of original's elements");
        printVectorContents(original, "original");
        printVectorContents(clone, "clone");
        System.out.println("--------------------------------------------------------");
        System.out.println();
    }

    public static void printVectorContents(Vector v, String name) {
        System.out.println("  Contents of \"" + name + "\":");

        // For each element in the vector, print out the index, the
        // class of the element, and the element itself
        for (int i = 0; i < v.size(); i++) {
            Object element = v.elementAt(i);
            System.out.println("  " + i + " (" +
                element.getClass().getName() + "): " +
                element);
        }
        System.out.println();
    }
}

Figure 1. Modifying Vector contents after cloning

In this example we create a Vector and add a StringBuffer to it. Note that StringBuffer (unlike, for example, String) is mutable: its contents can be changed after creation. Figure 2 shows the output of the example in Figure 1.

In the first block of output (”A”), we see that the clone operation was successful: The original vector and the clone have the same size (1), content types, and values. The second block of output (”B”) shows that the original vector and its clone are distinct objects. If we add another element to the clone, it only appears in the clone, and not in the original. The third block of output (”C”) is, however, a little trickier. Modifying the StringBuffer that was added to the original vector has changed the value of the first element of both the original vector and its clone. The explanation for this lies in the fact that clone made a shallow copy of the vector, so both vectors now point to the exact same StringBuffer instance.

This is, of course, sometimes exactly the behavior that you need. In other cases, however, it can lead to frustrating and inexplicable errors, as the state of an object seems to change “behind your back”.

The solution to this problem is to make a deep copy of the object. A deep copy makes a distinct copy of each of the object’s fields, recursing through the entire graph of other objects referenced by the object being copied. The Java API provides no deep-copy equivalent to Object.clone(). One solution is to simply implement your own custom method (e.g., deepCopy()) that returns a deep copy of an instance of one of your classes. This may be the best solution if you need a complex mixture of deep and shallow copies for different fields, but has a few significant drawbacks:

You must be able to modify the class (i.e., have the source code) or implement a subclass. If you have a third-party class for which you do not have the source and which is marked final, you are out of luck.

You must be able to access all of the fields of the class’s superclasses. If significant parts of the object’s state are contained in private fields of a superclass, you will not be able to access them.

You must have a way to make copies of instances of all of the other kinds of objects that the object references. This is particularly problematic if the exact classes of referenced objects cannot be known until runtime.

Custom deep copy methods are tedious to implement, easy to get wrong, and difficult to maintain. The method must be revisited any time a change is made to the class or to any of its superclasses.
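To make the custom-method approach concrete, here is a minimal sketch (the Order class and its fields are invented for illustration): immutable fields can be shared, while each mutable field must be explicitly copied.

```java
import java.util.ArrayList;
import java.util.List;

public class Order {
    private String id;          // immutable: safe to share between copies
    private List<String> items; // mutable: must be copied explicitly

    public Order(String id, List<String> items) {
        this.id = id;
        this.items = items;
    }

    public List<String> getItems() {
        return items;
    }

    // Hand-written deep copy: the String is shared, the List is duplicated.
    // Every new mutable field added to this class must also be handled here.
    public Order deepCopy() {
        return new Order(id, new ArrayList<String>(items));
    }
}
```

After deepCopy(), adding an item to the copy's list leaves the original's list untouched, which is exactly what shallow clone() fails to guarantee.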

A common solution to the deep copy problem is to use Java Object Serialization (JOS). The idea is simple: Write the object to an array using JOS's ObjectOutputStream and then use ObjectInputStream to reconstitute a copy of the object. The result will be a completely distinct object, with completely distinct referenced objects. JOS takes care of all of the details: superclass fields, following object graphs, and handling repeated references to the same object within the graph. Figure 3 shows a first draft of a utility class that uses JOS for making deep copies.

import java.io.IOException;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 *
 * A later version of this class includes some minor optimizations.
 */
public class UnoptimizedDeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Make an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            obj = in.readObject();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }
}

Figure 3. Using Java Object Serialization to make a deep copy

Unfortunately, this approach has some problems, too:

It will only work when the object being copied, as well as all of the other objects referenced directly or indirectly by the object, are serializable. (In other words, they must implement java.io.Serializable.) Fortunately it is often sufficient to simply declare that a given class implements java.io.Serializable and let Java's default serialization mechanisms do their thing.

Java Object Serialization is slow, and using it to make a deep copy requires both serializing and deserializing. There are ways to speed it up (e.g., by pre-computing serial version ids and defining custom readObject() and writeObject() methods), but this will usually be the primary bottleneck.

The byte array stream implementations included in the java.io package are designed to be general enough to perform reasonably well for data of different sizes and to be safe to use in a multi-threaded environment. These characteristics, however, slow down ByteArrayOutputStream and (to a lesser extent) ByteArrayInputStream.

The first two of these problems cannot be addressed in a general way. We can, however, use alternative implementations of ByteArrayOutputStream and ByteArrayInputStream that make three simple optimizations:

ByteArrayOutputStream, by default, begins with a 32-byte array for the output. As content is written to the stream, the required size of the content is computed and, if necessary, the array is expanded to the greater of the required size or twice the current size. JOS produces output that is somewhat bloated (for example, fully qualified class names are included in uncompressed string form), so the 32-byte default starting size means that lots of small arrays are created, copied into, and thrown away as data is written. This has an easy fix: construct the array with a larger initial size.

All of the methods of ByteArrayOutputStream that modify the contents of the byte array are synchronized. In general this is a good idea, but in this case we can be certain that only a single thread will ever be accessing the stream. Removing the synchronization will speed things up a little. ByteArrayInputStream’s methods are also synchronized.

The toByteArray() method creates and returns a copy of the stream’s byte array. Again, this is usually a good idea: If you retrieve the byte array and then continue writing to the stream, the retrieved byte array should not change. For this case, however, creating another byte array and copying into it merely wastes cycles and makes extra work for the garbage collector.

An optimized implementation of ByteArrayOutputStream is shown in Figure 4.
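Figure 4 is not reproduced here, but a minimal sketch of what such a FastByteArrayOutputStream might look like follows, applying the optimizations just described: unsynchronized methods, a larger (and configurable) initial buffer, and a getInputStream() method that hands the internal array to an input stream without copying it. To stay self-contained this sketch returns a standard ByteArrayInputStream; the article's version returns the FastByteArrayInputStream of Figure 5.

```java
import java.io.InputStream;
import java.io.OutputStream;

/**
 * Sketch of a ByteArrayOutputStream alternative: no synchronization,
 * a larger default initial buffer, and no defensive copy when the
 * written bytes are read back.
 */
public class FastByteArrayOutputStream extends OutputStream {

    protected byte[] buf;
    protected int size = 0;

    public FastByteArrayOutputStream() {
        this(5 * 1024); // much larger than the 32-byte java.io default
    }

    public FastByteArrayOutputStream(int initSize) {
        this.buf = new byte[initSize];
    }

    // Grow to the greater of the required size or twice the current size.
    private void verifyBufferSize(int sz) {
        if (sz > buf.length) {
            byte[] old = buf;
            buf = new byte[Math.max(sz, 2 * buf.length)];
            System.arraycopy(old, 0, buf, 0, old.length);
        }
    }

    public void write(int b) {
        verifyBufferSize(size + 1);
        buf[size++] = (byte) b;
    }

    public void write(byte[] b, int off, int len) {
        verifyBufferSize(size + len);
        System.arraycopy(b, off, buf, size, len);
        size += len;
    }

    /**
     * Returns an input stream over the internal buffer; unlike
     * toByteArray(), no copy of the data is made.
     */
    public InputStream getInputStream() {
        return new java.io.ByteArrayInputStream(buf, 0, size);
    }
}
```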

The getInputStream() method returns an instance of an optimized version of ByteArrayInputStream that has unsynchronized methods. The implementation of FastByteArrayInputStream is shown in Figure 5.

import java.io.InputStream;

/**
 * ByteArrayInputStream implementation that does not synchronize methods.
 */
public class FastByteArrayInputStream extends InputStream {

    /**
     * Our byte buffer
     */
    protected byte[] buf = null;

    /**
     * Number of bytes that we can read from the buffer
     */
    protected int count = 0;

    /**
     * Number of bytes that have been read from the buffer
     */
    protected int pos = 0;

    public FastByteArrayInputStream(byte[] buf, int count) {
        this.buf = buf;
        this.count = count;
    }

    public final int available() {
        return count - pos;
    }

    public final int read() {
        return (pos < count) ? (buf[pos++] & 0xff) : -1;
    }

    public final int read(byte[] b, int off, int len) {
        if (pos >= count)
            return -1;
        if ((pos + len) > count)
            len = (count - pos);
        System.arraycopy(buf, pos, b, off, len);
        pos += len;
        return len;
    }

    public final long skip(long n) {
        if ((pos + n) > count)
            n = count - pos;
        if (n < 0)
            return 0;
        pos += n;
        return n;
    }
}

Figure 5. Optimized version of ByteArrayInputStream.

Figure 6 shows a version of a deep copy utility that uses these classes:

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 */
public class DeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            FastByteArrayOutputStream fbos =
                new FastByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(fbos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Retrieve an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in =
                new ObjectInputStream(fbos.getInputStream());
            obj = in.readObject();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }
}

Figure 6. Deep-copy implementation using optimized byte array streams

The extent of the speed boost will depend on a number of factors in your specific application (more on this later), but the simple class shown in Figure 7 tests the optimized and unoptimized versions of the deep copy utility by repeatedly copying a large object.

import java.util.Hashtable;
import java.util.Vector;
import java.util.Date;

public class SpeedTest {

    public static void main(String[] args) {
        // Make a reasonably large test object. Note that this doesn't
        // do anything useful -- it is simply intended to be large, have
        // several levels of references, and be somewhat random. We start
        // with a hashtable and add vectors to it, where each element in
        // the vector is a Date object (initialized to the current time),
        // a semi-random string, and a (circular) reference back to the
        // object itself. In this case the resulting object produces
        // a serialized representation that is approximately 700K.
        Hashtable obj = new Hashtable();
        for (int i = 0; i < 100; i++) {
            Vector v = new Vector();
            for (int j = 0; j < 100; j++) {
                v.addElement(new Object[] {
                    new Date(),
                    "A random number: " + Math.random(),
                    obj
                });
            }
            obj.put(new Integer(i), v);
        }

        int iterations = 10;

        // Make copies of the object using the unoptimized version
        // of the deep copy utility.
        long unoptimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = UnoptimizedDeepCopy.copy(obj);
            unoptimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        // Repeat with the optimized version
        long optimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = DeepCopy.copy(obj);
            optimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        System.out.println("Unoptimized time: " + unoptimizedTime);
        System.out.println("  Optimized time: " + optimizedTime);
    }
}

Figure 7. Testing the two deep copy implementations.

A few notes about this test:

The object that we are copying is large. While somewhat random, it will generally have a serialized size of around 700 Kbytes.

The most significant speed boost comes from avoiding extra copying of data in FastByteArrayOutputStream. This has several implications:

Using the unsynchronized FastByteArrayInputStream speeds things up a little, but the standard java.io.ByteArrayInputStream is nearly as fast.

Performance is mildly sensitive to the initial buffer size in FastByteArrayOutputStream, but is much more sensitive to the rate at which the buffer grows. If the objects you are copying tend to be of similar size, copying will be much faster if you initialize the buffer size and tweak the rate of growth.

Measuring speed using elapsed time between two calls to System.currentTimeMillis() is problematic, but for single-threaded applications and testing relatively slow operations it is sufficient. A number of commercial tools (such as JProfiler) will give more accurate per-method timing data.

Testing code in a loop is also problematic, since the first few iterations will be slower until HotSpot decides to compile the code. Testing larger numbers of iterations alleviates this problem.

Garbage collection further complicates matters, particularly in cases where lots of memory is allocated. In this example, we manually invoke the garbage collector after each copy to try to keep it from running while a copy is in progress.

These caveats aside, the performance difference is significant. For example, the code as shown in Figure 7 (on a 500 MHz G3 Macintosh iBook running OS X 10.3 and Java 1.4.1) reveals that the unoptimized version requires about 1.8 seconds per copy, while the optimized version only requires about 1.3 seconds. Whether or not this difference is significant will, of course, depend on the frequency with which your application does deep copies and the size of the objects being copied.

3/27/2007

Generally we choose between ArrayList and Vector based on one basic requirement: whether the list has to be synchronized or not. Beyond that, we rarely consider how well each implementation's algorithms fit our actual access patterns.

For example, if we know there will be exactly 10 objects to store and iterate over every time, we still tend to reach for ArrayList or Vector out of fondness for the API, without considering a plain array, which is a very good and more efficient choice for a known, fixed size.

There are several other access patterns we as developers should weigh when choosing a list implementation. Some of them are:

1. Insert elements at the end of a list
2. Insert elements at the beginning of a list
3. Insert elements at random positions in a list
4. Access elements from the first to the last
5. Access elements from the last to the first
6. Access elements in random order
7. Update elements in random order
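A small sketch of why these access patterns matter (the sizes here are illustrative, not a benchmark): ArrayList favors random access, while LinkedList favors insertion at the ends.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListChoiceDemo {
    public static void main(String[] args) {
        int n = 50000;

        // Inserting at the beginning is O(n) per insert for ArrayList
        // (every existing element shifts right), but O(1) for LinkedList.
        List<Integer> arrayList = new ArrayList<Integer>();
        List<Integer> linkedList = new LinkedList<Integer>();
        for (int i = 0; i < n; i++) {
            arrayList.add(0, i);
            linkedList.add(0, i);
        }

        // Random access is O(1) for ArrayList, but O(n) for LinkedList
        // (the list is walked from one end on every get()).
        System.out.println(arrayList.get(n / 2).equals(linkedList.get(n / 2)));
    }
}
```

Both lists end up with identical contents; the difference is purely in how much work each operation costs, which is why the choice should follow the dominant access pattern.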

DWR is a Java open source library which allows you to write Ajax web sites.

It allows code in a browser to use Java functions running on a web server just as if they were in the browser.

DWR consists of two main parts:

A Java servlet running on the server that processes requests and sends responses back to the browser.

JavaScript running in the browser that sends requests and can dynamically update the web page.

DWR works by dynamically generating JavaScript based on Java classes. The code does some Ajax magic to make it feel like the execution is happening in the browser, but in reality the server is executing the code and DWR is marshalling the data back and forth.

This method of remoting functions from Java to JavaScript gives DWR users a feel much like conventional RPC mechanisms like RMI or SOAP, with the benefit that it runs over the web without requiring web-browser plug-ins.

Java is fundamentally synchronous where Ajax is asynchronous. So when you call a remote method, you provide DWR with a callback function to be called when the data has been returned from the network.

The diagram shows how DWR can alter the contents of a selection list as a result of some JavaScript event like onclick.

DWR dynamically generates an AjaxService class in JavaScript to match some server-side code. This is called by the event handler. DWR then handles all the remoting details, including converting all the parameters and return values between JavaScript and Java. It then executes the supplied callback function (populateList in the example below), which uses a DWR utility function to alter the web page.
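The server side of such a setup is just an ordinary Java class; nothing DWR-specific appears in the code itself (the method name and its behavior below are invented for illustration, and DWR would expose the class to the browser based on its dwr.xml configuration):

```java
import java.util.ArrayList;
import java.util.List;

// A plain Java class that DWR could expose to the browser as the
// "AjaxService" JavaScript object mentioned above.
public class AjaxService {

    // Returns the options for the selection list; in a real
    // application this might query a database.
    public List<String> getListOptions(String category) {
        List<String> options = new ArrayList<String>();
        options.add(category + " - option 1");
        options.add(category + " - option 2");
        options.add(category + " - option 3");
        return options;
    }
}
```

On the browser side, the generated JavaScript stub would call this method and pass the result to the populateList callback; DWR converts the List<String> into a JavaScript array along the way.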

3/15/2007

A couple of differences between Tomcat and Oracle Application Server (OC4J)... there could be more:

org.w3c.dom.Document.getElementsByTagName():
Tomcat (Xerces): doc.getElementsByTagName("SOAP-ENV:Envelope") is valid. Xerces treats the namespace prefix as if it were just part of the tag name.
OC4J (oracle.xml.parser.v2): doc.getElementsByTagName("SOAP-ENV:Envelope") is not valid.
Solution: Use doc.getElementsByTagNameNS(). To use this method, you must make sure to call setNamespaceAware(true) on your DocumentBuilderFactory.
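A sketch of the portable, namespace-aware approach (the envelope string here is a minimal stand-in for a real SOAP message):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class NamespaceLookup {
    public static void main(String[] args) throws Exception {
        String xml =
            "<SOAP-ENV:Envelope xmlns:SOAP-ENV=" +
            "\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<SOAP-ENV:Body/></SOAP-ENV:Envelope>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // required for getElementsByTagNameNS
        Document doc = factory.newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        // Look up by namespace URI + local name instead of prefix + name,
        // so the result does not depend on the parser or the prefix used.
        NodeList bodies = doc.getElementsByTagNameNS(
            "http://schemas.xmlsoap.org/soap/envelope/", "Body");
        System.out.println(bodies.getLength());
    }
}
```

Because the lookup keys on the namespace URI, it keeps working even if the document author picks a different prefix than SOAP-ENV.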

The serialVersionUID is used when deserializing an object, to determine whether that object is compatible with its class file in the JVM doing the deserialization. If the serialVersionUID of the class file doesn't match the serialVersionUID of the deserialized object, you'll get an InvalidClassException. If the class file doesn't explicitly declare a serialVersionUID, then the serialization runtime has to compute it, which is a relatively expensive process.

So what makes the object that was serialized and its class incompatible?
1. Deleting fields from the class
2. Changing the type of a field
3. Changing a class from Serializable to Externalizable or vice versa

Keep Pandora's box closed by adding a serialVersionUID to the class file.
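Declaring the field is a one-liner (the Customer class below is a made-up example):

```java
import java.io.Serializable;

public class Customer implements Serializable {

    // Explicit version: the runtime no longer has to compute one, and
    // compatible changes to the class won't break old serialized data.
    private static final long serialVersionUID = 1L;

    private String name;

    public Customer(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```

As long as serialVersionUID stays 1L, compatible changes (such as adding a new field) will not invalidate instances that were serialized with the older class.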

Fed up with email messages bouncing because your attachments are too big? There's a simple solution -- get a very good new Firefox add-in, AllPeers.

It's a simple peer-to-peer file-sharing app. Set it up, select files you want to share, and who you want to share them with, and the person gets a notification. He can then grab them. It's that simple.

For the moment, AllPeers runs only under Firefox, but expect it to work with Internet Explorer some time in the future. If you've ever had a problem with sending a file via email, it's worth a look!

3/09/2007

The objective of the Singleton pattern is that, at any given time, there can be only one instance of a class.

A singleton can be used to implement a connection pool: making the pool object a singleton avoids wasting resources on duplicate pools.

Steps to create a singleton:

Make the default constructor of the class private, which prevents instantiation of the object by other classes.

Define a static method that returns the singleton object: if the object doesn't exist, a new object is created and returned; otherwise the existing object is returned.

To prevent the object from being cloned, override the clone() method of the Object class.

If an application is multithreaded, there is a chance that two threads access the static method at the same time, which may create more than one instance of the singleton class. To avoid this scenario, the static method must be synchronized.
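Putting the steps above together, a minimal lazily-initialized sketch (using the connection pool idea for the class name):

```java
public class ConnectionPool {

    private static ConnectionPool instance;

    // Private constructor prevents instantiation by other classes.
    private ConnectionPool() {
    }

    // Synchronized so that two threads cannot both observe a null
    // instance and end up creating two copies.
    public static synchronized ConnectionPool getInstance() {
        if (instance == null) {
            instance = new ConnectionPool();
        }
        return instance;
    }

    // Prevent cloning from producing a second instance.
    protected Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException();
    }
}
```

Every caller of ConnectionPool.getInstance() receives the same object, which is the whole point of the pattern.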

What..When...?
I always wonder why Java is not intelligent enough to fix problems in the code when it knows something went wrong, rather than just throwing exceptions. Maybe Gosling wants it that way, saving donuts for himself!

3/05/2007

In the first part of this article, we will discuss how to integrate a feature-rich map into your application in record time, by using the Google Maps API. The Google Maps API is an easy-to-use JavaScript API that enables you to embed interactive maps directly in your application's web pages. And as we will see, it is easy to extend it to integrate real-time server requests using Ajax.

Getting started with the Google Maps API is easy. There is nothing to download; you just need to sign up to obtain a key to use the API. There is no charge for publicly accessible sites (for more details, see the Sign up for the Google Maps API page). You need to provide the URL of your website, and, when your application is deployed on a website, your key will only work from this URL. One annoying thing about this constraint is that you need to set up a special key to use for your development or test machines: for the sample code, I had to create a special key for http://localhost:8080/maps, for example.

Once you have a valid key, you can see the Google Maps API in action. Let's start off with something simple: displaying a map on our web page.

Although the API is not particularly complicated, working with Google Maps requires a minimal knowledge of JavaScript. You also need to know the latitude and longitude of the area you want to display. If you're not sure, you can find this sort of information on the internet, or even by looking in an atlas!

Finally, in the body, we display the map. The size and shape of the map are taken from the corresponding HTML element. The map is initialized when the page is loaded (via the onload event). In addition, when the user leaves the page, the GUnload() method is called (via the onunload event). This cleans up the map data structure in order to avoid memory leak problems that occur in Internet Explorer.

Panning and ZoomingNow that we can successfully display a map, let's try to add some zoom functionality. The Google Maps API lets you add a number of different controls to your map, including panning and zooming tools, a map scale, and a set of buttons letting you change between Map and Satellite views. In our example, we'll add a small pan/zoom control and an "Overview map" control, which places a small, collapsible overview map. You add controls using the addControl() method, as shown here: