Step 2: The above command will have created the Maven skeleton. Add a "resources" folder under "src/main" to store the Spring context file. Open the pom.xml file of the application and add the following dependency jars. The cglib version 2 jar is required for the Spring 3 jars.

main
User registration for skill complete
Registration Complete. Mail will be sent asynchronously.
SimpleAsyncTaskExecutor-1
started producing email content
email sending has been completed

Q. How do you know that the EmailSender runs asynchronously on a separate thread?
A. Firstly, Thread.currentThread().getName() was added to print the thread names, and you can see that the AppService runs on the "main" thread while the EmailSender runs on the "SimpleAsyncTaskExecutor-1" thread. Secondly, Thread.sleep(3000) was added to demonstrate that "Registration Complete. Mail will be sent asynchronously." gets printed without being blocked. If you rerun App.java with the "@Async" annotation in "EmailSender" commented out, you will get a different output as shown below.

main
User registration for skill complete
main
started producing email content
email sending has been completed
Registration Complete. Mail will be sent asynchronously.

As you can see, there is only one thread named "main", and the order of the output is different.

May 29, 2013

Proxy design pattern for implementing thread safe wrappers

Q. What are the different ways you can make an object thread-safe? A.

Synchronize critical sections: An object's critical sections are those methods or blocks of code within methods that must be executed by only one thread at a time. By using Java's synchronized keyword, you can guarantee that only one thread at a time will ever execute the object's critical sections.

Make the object immutable. Immutable objects are, by their very nature, thread-safe, because no thread can ever write to their instance variables, so read/write or write/write conflicts cannot occur.

Use a thread-safe wrapper by applying the proxy design pattern. Let's have a look at an example.

Step 1: Here are the sample third party interface and implementation classes.
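Since the post's original listing is not shown here, the following is a minimal sketch of the idea: a hypothetical third-party Counter interface, a non-thread-safe implementation, and a synchronizing proxy that shares the same interface (all names are illustrative).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical third-party interface and (non-thread-safe) implementation
interface Counter {
    void increment();
    int count();
}

class SimpleCounter implements Counter {
    private int count;
    public void increment() { count++; }  // not atomic: read, add, write
    public int count() { return count; }
}

// Thread-safe wrapper via the proxy pattern: same interface,
// synchronizes every call before delegating to the real subject
class ThreadSafeCounterProxy implements Counter {
    private final Counter target;
    ThreadSafeCounterProxy(Counter target) { this.target = target; }
    public synchronized void increment() { target.increment(); }
    public synchronized int count() { return target.count(); }
}

public class ProxyDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new ThreadSafeCounterProxy(new SimpleCounter());
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.count()); // prints 4000 every time
    }
}
```

Because the client only sees the Counter interface, the proxy can be dropped in without touching the third-party code.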

Many industrial Java applications are developed on a Win32 platform but run on a Unix-based target platform. If you are a beginner to Unix and would like to work on an emulator that runs on a Win32 platform, this post is for you. MobaXterm is a set of Unix commands (GNU/Cygwin) included in a single portable exe file. MobaXterm integrates an X server and several network clients (SSH, RDP, VNC, telnet, rlogin, sftp, ftp, ...) accessible through a tab-based terminal.

Step 1: Download MobaXterm from http://mobaxterm.mobatek.net/ for personal use, that is, learning to use Unix. Get the portable edition. It is a zip file; extract MobaXterm_Personal_.exe to a subfolder of your choice and then create a shortcut. Double-clicking on the shortcut will bring up the MobaXterm window as shown below. Once it is launched, you will also see an ini file created named "MobaXterm.ini".

Step 2: Now you can change to your C drive in Win32 in one of two ways, as shown below:

1. cd /drives/c
2. cd /cygdrive/c

Step 3: You can now use this to practice the Unix posts from this and other blogs or books to enhance your Unix skills.

If you have already set up the JAVA_HOME environment variable, you can echo it as shown below.
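For example (the output depends on your environment; if JAVA_HOME is not set, the fallback text is printed instead):

```shell
# Print the JAVA_HOME environment variable, with a fallback if it is unset
echo "JAVA_HOME=${JAVA_HOME:-<not set>}"
```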

Note: MobaXterm allows you to add a number of other plugins, like the ksh shell, from their web site. All you have to do is download the plugin from the plugins tab on their website into the folder where the exe file is. Also be aware that Unix files do not have carriage return characters (use dos2unix to convert DOS files to Unix files), and that file names containing spaces are awkward to work with in Unix shells.

Step 4: You can set up a setenv.sh bash script to set up your Java environment as shown below.
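A minimal sketch of such a setenv.sh, assuming a Cygwin-style JDK path (adjust the path to your own installation):

```shell
#!/bin/bash
# setenv.sh - set up the Java environment for the current shell session.
# The JDK location below is an assumption; point it at your installation.
export JAVA_HOME="/cygdrive/c/Program Files/Java/jdk1.7.0"
export PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME set to $JAVA_HOME"
```

Source it with `. ./setenv.sh` so that the exported variables persist in the current shell.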

Step 6: In Unix, you can use the Vi editor as your editor, and MobaXterm comes with VIM (Vi IMproved) editor.

vi test.txt

Step 7: You can query the current directory as shown below

pwd

and query the files and folders in it as

ls -ltr

Practice whatever commands you would like to learn, such as:

hostname
whoami
date

Note:

Unix systems have no central place like the Windows registry for storing configuration information. Instead, Unix configuration is spread over a fair number of different files. Many of these files live in a directory called /etc: the list of users is in a file called /etc/passwd, while the name of the machine is typically found in /etc/hosts.

On Unix machines, ordinary programs cannot use network ports below 1024; only the special root user can use these ports.

There is no mandatory file locking in Unix, so you can delete a file while it is executing, and it will continue to exist as long as some process (which previously opened it) has an open handle to it. The directory entry for the file is removed when you delete it, so it cannot be opened any more, but processes already using the file can still use it. Once all processes using the file terminate, the file is deleted automatically.

May 28, 2013

SQL Interview Questions and Answers on deleting records

TRUNCATE always locks the table and its pages, but not each row, whereas a DELETE statement is executed using row locks: each row in the table is locked for deletion.

TRUNCATE removes all the records in the table, whereas DELETE can be used with a WHERE clause to remove records conditionally, that is, to remove only a handful of records.

TRUNCATE performance is much faster than DELETE, as its logging is minimal, whereas the DELETE command logs every record.

TRUNCATE does not retain the identity, whereas the DELETE command retains the identity. When you use TRUNCATE, if the table contains an identity column, the counter for that column is reset to the seed value defined for the column.

Truncate cleans up the object statistics and clears the allocated space whereas Delete retains the object statistics and allocated space.

TRUNCATE is a DDL (Data Definition Language) and DELETE is a DML (Data Manipulation Language).

The TRUNCATE command does not fire any triggers, whereas the DELETE command fires any triggers defined on the table. For example, a DELETE trigger can keep an audit trail of removed records by inserting the deleted records into an audit table.

Q. When will you use a TRUNCATE command? A. TRUNCATE is useful for purging a table with a huge amount of data (alternatively, you can drop the table and recreate it if that makes sense). Firing a DELETE command instead of a TRUNCATE command to empty a table with millions of records can lock the whole table, take a long time to complete, and at times cause the machine to hang.

The truncate command is executed as shown below.

TRUNCATE TABLE table_name

Q. Which command will you use to periodically purge data from your tables as part of a housekeeping job? A. Use a DELETE command within a transaction with a WHERE clause to remove data that is older than 7 years. Remove large amounts of data in batches as opposed to in a single transaction.
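Such a batched purge can be sketched as follows; the syntax is SQL Server style, and the table and column names are assumptions:

```sql
-- Delete rows older than 7 years in batches of 10,000 to keep
-- transactions short and avoid long table locks (names are illustrative)
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM audit_log
    WHERE created_date < DATEADD(YEAR, -7, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;
END
```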

Q. How will you delete a few records from a single table? A.

DELETE FROM parent p WHERE p.parent_name = 'Peter'

Q. How will you delete a few records from parent and child tables where the parent table has parent_name = 'Peter'? A.

Firstly, you need to delete the child records because the integrity constraint won't let you delete the parent record when there are child records.

Note the difference in syntax when you make a join with the child table. When there is only a single table involved, it is "DELETE FROM table_name", but when there is a join, it is "DELETE table_name" and then the "FROM" with the join clauses.
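For example, a sketch in SQL Server style syntax, assuming a child table with a parent_id foreign key:

```sql
-- Delete the child rows first, joining to the parent (note: "DELETE c ... FROM")
DELETE c
FROM child c
INNER JOIN parent p ON c.parent_id = p.parent_id
WHERE p.parent_name = 'Peter';

-- Now the parent row can be deleted without violating the constraint
DELETE FROM parent WHERE parent_name = 'Peter';
```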

Q. What can you do with the PURGE command? A. The PURGE command is used to clear the recycle bin. It is generally used with the DROP command. For example,

drop table tablename purge;

The above command will clear away the table from the database as well as from the recycle bin. After executing the purge command, you cannot retrieve the table using a flashback query.

Q. Can you write a Unix script that archives files that are older than 7 days from a folder, say /data/csv? The files in the folder /data/csv need to be split into groups of 10 files. For example, if you had 25 *.csv files under the folder /data/csv, 3 tar files containing 10, 10, and 5 files will be created.

A. Firstly, define a configuration file that contains the source dir, archive dir, how many days old, and split size. For example

zip.cfg file

/cygdrive/c/data/csv /cygdrive/c/data/csv/zip +7 10

Now, the shell script zip.sh reads zip.cfg and archives the files.
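The original script is not reproduced here, so below is a self-contained sketch of the approach; the /tmp directories and the sample-file setup are purely for illustration (in the real script the four values would be read from zip.cfg):

```shell
# Sketch of the zip.sh logic: take the config values, list files older
# than AGE days, split the listing into groups of SIZE names, tar each group.
# In the real script the values would come from zip.cfg via:
#   read SRC_DIR ARC_DIR AGE SIZE < zip.cfg
SRC_DIR=/tmp/data/csv      # source dir (illustrative)
ARC_DIR=/tmp/data/csv/zip  # archive dir (illustrative)
AGE=+7                     # files older than 7 days
SIZE=10                    # file names per tar archive

# Demo setup only: create 25 csv files with an old timestamp
mkdir -p "$SRC_DIR" "$ARC_DIR"
i=1
while [ "$i" -le 25 ]; do
  touch -t 202001010000 "$SRC_DIR/file$i.csv"
  i=$((i+1))
done

# List the old files and split the listing into groups of SIZE names
# (creates group_aa, group_ab, group_ac, ...)
find "$SRC_DIR" -maxdepth 1 -name '*.csv' -mtime "$AGE" \
  | split -l "$SIZE" - "$ARC_DIR/group_"

# Tar each group of file names, then remove the listing file
for g in "$ARC_DIR"/group_*; do
  [ -f "$g" ] || continue
  tar -cf "$g.tar" -T "$g" && rm -f "$g"
done

ls "$ARC_DIR"   # 25 files split 10 per group: expect 3 tar files
```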

-l: the number of lines (here, file names) per output file
-: read the input from standard input
The last argument is the prefix for the split file names.

The "ls" command lists the file names, and the split command takes 2 file names at a time, creating files with names like split_prefixaa, split_prefixab, split_prefixac, etc.; each of these files contains at most 2 names from the ls output. For example

Note: If you are running on Windows, you can practice the above code by downloading MobaXterm, which is a free Unix emulator for Windows. You need to download the files MobaXterm_Personal_5.0.exe and MobaXterm.ini, and for the korn shell download the plugin Pdksh.mxt3. Put all these files in the same folder and create a shortcut to MobaXterm_Personal_5.0.exe to start the MobaXterm window.

May 24, 2013

Java new I/O Interview Questions and Answers

The Java New IO is very powerful, and it is all about performance, performance, performance! But you need to get your head around how the buffers work. Here are some questions and answers that will help you do that.

Q. How does the Java NIO (i.e. New IO) introduced in Java 1.4 differ from the old IO? A. The NIO is all about better performance.

IO: Stream oriented. This means that you read one or more bytes at a time from a stream. What you do with the read bytes is up to you; they are not cached anywhere. Furthermore, you cannot move forward and back in the data in a stream. If you need to move forward and back in the data read from a stream, you need to buffer it yourself first.

NIO (New IO): Buffer oriented. This means better performance. Data is read into a buffer from which it can be processed later. You can move forward and back in the buffer, which gives you a bit more flexibility during processing. However, you also need to check that the buffer contains all the data you need in order to fully process it.

IO: Blocking IO. This means that when a thread invokes a read() or write(), that thread is blocked until there is some data to read, or the data is fully written. The thread can do nothing else in the meantime.

NIO: Non-blocking IO. This means better performance. NIO's non-blocking mode enables a thread to request reading data from a channel and get only what is currently available, or nothing at all if no data is currently available. Rather than remaining blocked until data becomes available for reading, the thread can proceed with doing something else.

IO: You need to create additional threads to read and write data in parallel. More threads mean more CPU context switching and more thread stacks consumed.

NIO: Selectors. This also means better performance. NIO's selectors allow a single thread to monitor multiple channels of input. You can register multiple channels with a selector, then use a single thread to "select" the channels that have input available for processing, or select the channels that are ready for writing. This selector mechanism makes it easy for a single thread to manage multiple channels; this is also known as multiplexing. The Apache MINA package is based on Java NIO for writing high-performance networking applications, for example a TCP server.

Here is the diagrammatic overview:

Q. How do you manipulate the buffer in NIO? A. The NIO buffer has the following three state variables: position, limit, and capacity.

position: The position value keeps track of how much you have read from or written to the buffer. It specifies from which array element the next byte will come. Thus, if you've written 1 byte (the shaded area) to a channel from a buffer, that buffer's position will be set to 1 (starting from 0), referring to the second element of the array as shown in the diagram.

limit: The limit variable specifies how much room there is left to put data into -- in the case of reading from a channel into a buffer or how much data there is left to get -- in the case of writing from a buffer into a channel. The position is always less than, or equal to, the limit. The limit is 3 (counting from 0) in the diagram below.

capacity: The capacity of a buffer specifies the maximum amount of data that can be stored in the underlying array. The limit can never be larger than the capacity. The capacity is 6 (counting from 0) in the diagram below.

In the above diagram, the position starts at 0, and after reading 1 byte it becomes 1; the limit value of 3 indicates that there are 2 more bytes to be read.

Q. What do you understand by the terms flip() and clear() in NIO? A.

Flip

When you are ready to write your data to an output channel, you must call the flip() method. This method does two key things:

1. It sets the limit to the current position.
2. It sets the position to 0.

You are now ready to begin writing data to a channel from the buffer starting from 0 to the limit.

Clear

After you have done your writes, the final step is to call the buffer's clear() method. This method resets the buffer in preparation for receiving more bytes. Clear does two key things:

1. It sets the limit to match the capacity.
2. It sets the position to 0.
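These state transitions can be seen with a short, self-contained sketch (the buffer size and byte values are arbitrary):

```java
import java.nio.ByteBuffer;

public class BufferStateDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8); // capacity = 8
        System.out.println("allocate: pos=" + buf.position() + " limit=" + buf.limit()); // pos=0 limit=8

        // Writing into the buffer advances the position; the limit stays at the capacity
        buf.put((byte) 'a').put((byte) 'b').put((byte) 'c');
        System.out.println("put x3:   pos=" + buf.position() + " limit=" + buf.limit()); // pos=3 limit=8

        // flip(): limit = current position, position = 0, ready to drain the buffer
        buf.flip();
        System.out.println("flip:     pos=" + buf.position() + " limit=" + buf.limit()); // pos=0 limit=3

        StringBuilder sb = new StringBuilder();
        while (buf.hasRemaining()) {
            sb.append((char) buf.get());
        }
        System.out.println("read: " + sb); // read: abc

        // clear(): limit = capacity, position = 0, ready to receive more data
        buf.clear();
        System.out.println("clear:    pos=" + buf.position() + " limit=" + buf.limit()); // pos=0 limit=8
    }
}
```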

Q. What is in the NIO.2 package introduced in Java 7? A. The More New I/O APIs for the Java Platform (NIO.2) in Java 7 enhance the New I/O APIs (NIO) introduced in Java 1.4 by adding:

The four asynchronous channels to the java.nio.channels package.

The java.io.File class is superseded by the java.nio.file.Path class, which allows logical manipulation of paths in memory without accessing the file system.

The NIO.2 associates metadata (such as whether a file is hidden, whether it is a directory, its size, and access control) with attributes and provides access to them through the java.nio.file.attribute package. Since different file systems have different notions about which attributes should be tracked, NIO.2 groups the attributes into views, each of which maps to a particular file system implementation.

The NIO.2 provides support for both hard links and symbolic links and the Path class knows how to detect a link and will behave in the default manner if no configuration of behavior is specified.

The NIO.2 comes with a set of brand new methods for managing files and directories, such as create, read, write, move, delete, and so on. Most of these methods are found in the java.nio.file.Files class.

The FileVisitor interface introduced in Java 7 makes recursing through directories and files a breeze.

The Watch Service API was introduced in Java 7 (NIO.2) as a thread-safe service that is capable of watching a directory for changes to its content through actions such as create, delete, and modify. This means, with NIO.2, you no longer need to poll the file system for changes or use other in-house solutions to monitor the file system changes.

Java 7 (NIO.2) introduces a new interface for working with random access files, SeekableByteChannel, and an interface named NetworkChannel that provides network channel classes for networking.
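A few of these NIO.2 features can be sketched with the Paths and Files classes (the file names below are illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class Nio2Demo {
    public static void main(String[] args) throws IOException {
        // Logical path manipulation: no file system access needed
        Path logical = Paths.get("demo", "notes.txt");
        System.out.println(logical.getFileName()); // notes.txt

        // Create, write, read, and delete via java.nio.file.Files
        Path file = Paths.get("nio2-demo.txt");
        Files.write(file, Arrays.asList("line1", "line2"), StandardCharsets.UTF_8);
        List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);
        System.out.println(lines); // [line1, line2]

        // Query a couple of file attributes
        System.out.println("size=" + Files.size(file) + " dir=" + Files.isDirectory(file));

        Files.delete(file);
    }
}
```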

May 23, 2013

Writing cross platform compatible Java code

Java is a cross platform language in the sense that a compiled Java program runs on all platforms for which there exists a JVM, such as Windows, Mac OS, and Unix. Having said that, there are scenarios where Java programmers need to code things carefully. Experienced Java programmers will be well placed to answer the following question.

Q. Can you list some of the cross platform issues that a Java programmer needs to be mindful of based on your experience? A.

1. Carriage return and new line characters across different platforms.

If you are processing a file, such as a CSV file, and you need to parse the text line by line, you need to be aware of the newline characters across different operating systems:

Windows: \r\n
Unix: \n
Mac: \r

Here is an example of the split function in Java that will work across different platforms. In Java, an extra "\" is used as an escape character in string literals.
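The following sketch splits text with Windows, Unix, and old Mac line endings; the regex "\r?\n" covers Windows and Unix, and adding "|\r" to the alternation also covers the old Mac ending:

```java
public class LineSplitDemo {
    public static void main(String[] args) {
        // Windows (\r\n) and Unix (\n) endings in the same text
        String text = "line1\r\nline2\nline3";
        String[] lines = text.split("\r?\n");
        System.out.println(lines.length); // 3

        // Add the old Mac (\r) ending to the alternation as well
        String mixed = "line1\rline2\r\nline3\nline4";
        String[] all = mixed.split("\r\n|\r|\n");
        System.out.println(all.length); // 4
    }
}
```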

2. File path separators

The File.separator constant takes care of cross-platform compatibility by using the correct separator for the platform. What is even better is to nest your File construction with the "public File(String parent, String child)" constructor.
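For instance (the directory and file names are illustrative):

```java
import java.io.File;

public class FilePathDemo {
    public static void main(String[] args) {
        // The parent/child constructor inserts the platform's own separator
        File dir = new File("data");
        File csv = new File(dir, "persons.csv");
        System.out.println(csv.getPath()); // data/persons.csv on Unix, data\persons.csv on Windows

        // File.separator holds the platform-specific separator
        System.out.println("separator: " + File.separator);
    }
}
```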

3. Thread priorities

Thread priorities are another thing to consider across platforms. Other operating systems, like Solaris for example, have more thread priorities than Windows. So, if you are working heavily with multi-threading, the OS is something that may affect the program's behavior.

4. Using Native Code

Using native code (via JNI) can cause cross platform issues.

5. Beware of the System class

System.getProperty("os.name") is clearly OS dependent. The other most common one is System.exec() as it calls another application from your system, and you should know if the application you are calling works across other systems.

Even though Java is touted as a Write Once Run Anywhere (WORA) programming language, one needs to be aware of the above potential issues and test properly across the other platforms. Watch out for these gotchas in code reviews.

6. Character sets

When converting bytes to a String or reading a file in different environments, it is imperative to use the right character set. Otherwise, you can have cross-platform issues, like a character being displayed properly on a Win32 platform but not on a Unix platform, and vice versa.

Recently UTF-8 has become the default encoding on many systems, but sometimes you have to deal with files originating from older systems with other encodings. ASCII is an encoding that uses 7 bits to map all US characters. UTF-8 was designed for backward compatibility with ASCII and to avoid the complications of endianness and byte order marks in UTF-16 and UTF-32. The most useful and practical file encoding today is UTF-8, because it supports Unicode and is widely used on the internet. UTF-8 encodes each of the 1,112,064 code points in the Unicode character set using one to four 8-bit bytes, and it has become the dominant character encoding for the World Wide Web.

May 21, 2013

Q. Can you explain the decorator design pattern? A. By implementing the decorator pattern you construct a wrapper around an object that extends its behavior. The wrapper will do its job before or after delegating the call to the wrapped instance. The decoration happens at run time. A good example is the Java I/O classes, as shown below: each reader or writer decorates another to extend or modify its behavior.

As you can see, each reader extends the behavior at run time. This is the power of object composition as opposed to inheritance: by composing a few classes at run time, the desired behavior can be created. Here is another example demonstrating an interleaved read using a class from the Apache Commons IO library.

import java.io.*;
import org.apache.commons.io.input.TeeInputStream;

class InterleavedReadingFromFile {
    public static void main(String[] args) throws IOException {
        // Create the source input stream. Note the escaped backslashes in the Windows path.
        InputStream is = new FileInputStream("c:\\temp\\persons.txt");
        // Create a piped input stream for one of the readers.
        PipedInputStream in = new PipedInputStream();
        // Create a tee-splitter for the other reader. This is from the Apache Commons IO library.
        TeeInputStream tee = new TeeInputStream(is, new PipedOutputStream(in));
        // Create the two buffered readers.
        BufferedReader br1 = new BufferedReader(new InputStreamReader(tee));
        BufferedReader br2 = new BufferedReader(new InputStreamReader(in));
        // You can now do interleaved reads
        System.out.println("1 line from br1:");
        System.out.println(br1.readLine());
        System.out.println("2 lines from br2:");
        System.out.println(br2.readLine());
        System.out.println(br2.readLine());
        System.out.println();
        System.out.println("1 line again from br1:");
        System.out.println(br1.readLine());
        System.out.println();
    }
}

Q. Can you write a class using the decorator design pattern to print numbers from 1 to 10, and then decorators that optionally print only even or odd numbers? A. See the Java decorator pattern example below.

Q. How does a decorator design pattern differ from a proxy design pattern? A. In the proxy pattern, you have a proxy and a real subject. The relationship between a proxy and the real subject is typically set at compile time, whereas decorators can be recursively constructed at run time. The decorator pattern is also known as the wrapper pattern, and the proxy pattern as the surrogate pattern. The purpose of the decorator pattern is to add additional responsibilities to an object. These responsibilities can of course be added through inheritance, but composition provides better flexibility, as explained above via the Java I/O classes. The purpose of the proxy pattern is to add an intermediary between the client and the target object. This intermediary shares the same interface as the target object. Here are some scenarios in which a proxy pattern can be applied:

A remote proxy provides a local representative for an object in a different address space, for example providing an interface for remote resources such as web services, REST resources, or EJBs using RMI.

A virtual proxy creates an expensive object on demand.

A protection proxy controls access to the original object. Protection proxies are useful when objects should have different access rights.

A smart reference is a replacement for a bare pointer that performs additional actions when an object is accessed.

Adding a thread-safe feature to an existing class without changing the existing class's code. This is useful when you do not have the freedom to fix thread-safety issues in a third-party library.

Use BufferedReader and BufferedWriter to increase efficiency, because I/O performance depends a lot on the buffering strategy.

Favor NIO over the old IO, because the old I/O is stream oriented and uses blocking IO, whereas the NIO (aka New IO) is buffer oriented, uses non-blocking I/O, and has selectors.

To avoid issues like "the file reading works in Windows but not in Unix", use the java.io.File constructors instead of working with file names as Strings. The FilenameUtils class in Apache Commons IO handles issues relating to operating systems.

Q. When would you use a decorator design pattern? A. The decorator pattern should be used when:

Object responsibilities and behaviors should be dynamically modifiable

Concrete implementations should be decoupled from responsibilities and behaviors

Q. Can you write a class using the decorator design pattern to print numbers from 1 to 10, and then decorators that optionally print only even or odd numbers? A. This can be done by subclassing, i.e. via inheritance, but too much subclassing is definitely a bad thing. Composition is more powerful than subclassing, as you can get different behaviors by decorating at run time. In the code below, you will realize the power of object composition and why the GoF design patterns favor composition over inheritance.

Step 6: Finally, a sample file that shows how the above classes can be decorated at run time using object composition to get different outcomes. Additional implementations of NextNumber, like PrintPrimeNumbers, PrintMultiplesOfSeven, PrintFibonacciNumber, etc., can be added following the Open-Closed design principle.
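The original listings are not shown here, so below is a minimal sketch of such classes, assuming a NextNumber interface that produces the numbers to print (the class names follow the post's NextNumber naming, but the bodies are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

interface NextNumber {
    List<Integer> numbers();
}

// The base component: produces the numbers 1 to 10
class NumbersOneToTen implements NextNumber {
    public List<Integer> numbers() {
        List<Integer> result = new ArrayList<>();
        for (int i = 1; i <= 10; i++) result.add(i);
        return result;
    }
}

// Decorators wrap another NextNumber and filter its output at run time
class EvenNumbers implements NextNumber {
    private final NextNumber wrapped;
    EvenNumbers(NextNumber wrapped) { this.wrapped = wrapped; }
    public List<Integer> numbers() {
        List<Integer> result = new ArrayList<>();
        for (int n : wrapped.numbers()) if (n % 2 == 0) result.add(n);
        return result;
    }
}

class OddNumbers implements NextNumber {
    private final NextNumber wrapped;
    OddNumbers(NextNumber wrapped) { this.wrapped = wrapped; }
    public List<Integer> numbers() {
        List<Integer> result = new ArrayList<>();
        for (int n : wrapped.numbers()) if (n % 2 != 0) result.add(n);
        return result;
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // Different behaviors by composing decorators at run time
        System.out.println(new NumbersOneToTen().numbers());                  // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        System.out.println(new EvenNumbers(new NumbersOneToTen()).numbers()); // [2, 4, 6, 8, 10]
        System.out.println(new OddNumbers(new NumbersOneToTen()).numbers());  // [1, 3, 5, 7, 9]
    }
}
```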

May 16, 2013

Java I/O Interview Questions and Answers

Java I/O interview questions are popular with some interviewers, especially when the job you are applying for requires file processing. The Java NIO (stands for New I/O) package was introduced in Java 1.4, and NIO.2 was released in Java 7.

Q. Why do you need to favor NIO (i.e. New I/O) over the old I/O? A. Stream-oriented I/O (the old I/O) deals with data one byte at a time: an input stream produces one byte of data, and an output stream consumes one byte of data. It is very easy to create filters and chain several filters together so that each one does its part in what amounts to a single, sophisticated processing mechanism. On the flip side, stream-oriented I/O is often rather slow.

A block-oriented I/O system deals with data in blocks. The New I/O consumes a block of data in one step. Processing data by the block can be much faster than processing it by byte (i.e. streamed). But block-oriented I/O lacks some of the elegance and simplicity of stream-oriented I/O.

Q. Can you describe different ways in which you can read data from a file? A. Files can be read in a number of ways. Say there is a sample file named person.csv in the folder c:\temp\tlm. The examples below read the content of this file and print it to a console. In reality, you can do whatever you want once you have read it, like mapping it to Java POJOs, etc.
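Two common approaches can be sketched as follows; to keep the example self-contained it first writes a small person.csv with illustrative data:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class ReadFileDemo {
    public static void main(String[] args) throws IOException {
        // Demo setup only: write a small person.csv so the example is runnable
        Path path = Paths.get("person.csv");
        Files.write(path, "John,25\nPeter,35\n".getBytes(StandardCharsets.UTF_8));

        // 1. Old IO: a BufferedReader decorating a FileReader, read line by line
        try (BufferedReader br = new BufferedReader(new FileReader("person.csv"))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        }

        // 2. NIO.2 (Java 7+): read all lines in a single call
        List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
        System.out.println(lines); // [John,25, Peter,35]

        Files.delete(path);
    }
}
```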

The service class methods can be protected by declaring the following in the Spring context file of the application where the methods reside.

<!-- Comment this line out locally to bypass security access control in development, but don't check it in commented out, as security will be turned off -->
<security:global-method-security secured-annotations="enabled" pre-post-annotations="enabled" jsr250-annotations="enabled"/>

Once declared, you can protect your service class methods as shown below.

May 9, 2013

Notepad++ plugin for viewing JSON data and other benefits

Q. Why is Notepad++ a very handy tool? A. Notepad++ is a very handy developer tool. Here are some of the productivity benefits of Notepad++.

It is a free and lightweight source code editor that runs in the Windows environment. You can use it to view source code, log files, and other text data.

It can be used for syntax highlighting, line numbering, search and replace with or without regular expressions, modified file detection, file search, etc.

Working with JSON data

Recently, I had to work a lot with JSON data, and stumbled across a very handy plugin for Notepad++ called "JSONViewer Notepad++", which allows you to format JSON data and view it as a tree structure. Here are the simple steps to install this plugin.

Step 4: When you open up your Notepad++, you will see the "JSONViewer" sub menu under the "Plugins" on the top main menu. You can format the JSON text by highlighting your text first and then clicking on Plugins --> JSON Viewer --> Format JSON.

Extracting data

If you had some tab-delimited data like

FirstName Surname Age
John Smith 25
Peter Warren 35
Lisa Jenkins 28

and if you want to extract out all the first names, you can invoke the column mode by pressing "Alt" key and then select the "FirstName" column with the mouse selection. Alternatively, you can use the "Alt+Shift+Arrow Key" to select the column entries you need. These options are under the "Edit --> Column Mode" on the top menu.

You can use the "Find in Files" tab to search and replace across multiple files. I use this feature to search for files that contain certain text (i.e. like grep). This is a bulk find-and-replace capability. It also has other bulk features, like "File --> Save All" for all open files.

Split views, file comparisons and synchronized scrolling

Select View --> Move/Clone Current Document to create split views to compare two files side by side. On split views you can select Plugins --> Compare --> compare to highlight the differences or select View --> Synchronize Vertical Scrolling to scroll both views in tandem.

Language feature to highlight reserved key words

Setting the language of the file like Java, JavaScript, etc via Language --> J --> Java or JavaScript will help you easily visually distinguish between functions, reserved words, comments, text, and other types of symbols and expressions in your code.

The latest Notepad++ version should have a TextFX menu to tidy up HTML tags to be XHTML compliant. You can also automate any monotonous tasks you do repeatedly with macros. It can also synchronize files with Subversion (i.e. SVN). Hence, Notepad++ is a powerful open-source developer tool that makes you more productive.

May 7, 2013

Spring security pre-authentication scenario - Part1

The Spring Security pre-authentication scenario assumes that a valid authenticated user is available via either Single Sign-On (SSO) applications like Siteminder, Tivoli, etc., or X509 certificate based authentication. Spring Security in this scenario is only used for authorization.

Step 3: The servlet filter configured above will make use of a spring context file like ssoContext.xml to define the authorization sequences. The ssoContext file can be imported via the applicationContext.xml file bootstrapped via the web.xml file.

Step 4: The ssoContext.xml is defined below, showing how the user can be retrieved from the HTTP header SM_USER for Siteminder and passed to your own implementation to retrieve the roles (aka authorities). All the classes configured below are Spring classes except for UserDetailsServiceImpl, which is used to retrieve the authorities.

Step 5: Define the UserDetailsServiceImpl class, which needs to implement the Spring interface UserDetailsService and its required method "public UserDetails loadUserByUsername(String username)". The returned model object "UserDetails" is a Spring type as well.

Pay attention to how the different artifacts are wired up using both Spring XML files and annotations. Spring beans can be wired either by name or by type; @Autowired is by default a type-driven injection. @Autowired is a Spring annotation, while @Inject is a JSR-330 annotation; @Inject is equivalent to @Autowired or @Autowired(required=true). The @Qualifier Spring annotation can be used to further fine-tune auto-wiring: when you create more than one bean of the same type and want to wire only one of them to a property, you can use @Qualifier along with @Autowired to remove the ambiguity by specifying which exact bean should be wired.

Step 3: The myapp-applicationContext.xml snippet with config to enable auto-scan with annotations.

As you can see, the controller, service, and DAO layer classes are not configured here, as they are scanned via annotations (@Component is the parent stereotype annotation from which the other annotations like @Service, @Repository, and @Controller are derived).

The annotations shown above allow you to declare beans that are to be picked up by autoscanning with <context:component-scan/> or @ComponentScan.

The @Configuration annotation was designed as a replacement for XML configuration files. @Configuration-annotated classes are still able to use annotated (@Autowired, @Inject, etc.) fields and properties to request beans (and even other @Configuration-annotated beans) from the container. Here is an example of how Apache Camel is wired up using the @Configuration and @Bean annotations in Step 2.