Tuesday, February 02, 2016

A few weeks back I went out to the Tampa Java (JVM) User Group meeting to watch Venkat Subramaniam speak about lambdas and functional programming with Java 8. I had already read his book on the topic, Functional Programming in Java: Harnessing the Power of Java 8 Lambda Expressions, but I could not pass up the opportunity to see him speak in person. I make every attempt to see speakers/authors/programmers such as Venkat in person because, for me, it is like a spiritual experience for my (programmer) soul.

Many people attend places of worship to hear a community leader speak about known truths, to hear them from someone who believes in them passionately. People are there to reaffirm their beliefs and walk away from a positive experience with a sense of renewed strength. It is invigorating, and it helps carry them through the days ahead. The same thing happens when I attend events like last month's Tampa Java (JVM) User Group meeting. Venkat's presentation and live programming invigorated my (programmer) soul as he spoke about the truths we developers face on a daily basis. He elaborated on how we approach simple problems with an abundance of complexity, and how we are gluttons for punishment, writing multi-threaded code that guarantees we will struggle to recreate that production defect in our tests as we run the debugger. I truly appreciate the passionate speakers/authors/programmers who evangelize the truth with an incredible amount of passion for the craft of programming, and who invigorate our (programmer) souls so we can continue on our programming mission for a lifetime of learning and personal growth.

Friday, August 17, 2012

Hazelcast offers many features to developers working with systems of all shapes and sizes. Its implementations of a distributed Queue and Map make it possible to keep Web applications running while operational systems are down for maintenance or otherwise unavailable. Here is an approach to solving such a problem with Hazelcast.

Let's start with our business model. We sell widgets online and through our call center. We also service order and customer related issues in our call center. Every evening, our widget product database is inaccessible because of some maintenance tasks. Here is one place where Hazelcast fits into the picture.

Before our maintenance window, we preload the widget product data into a Hazelcast distributed Map named "widgets". Each map entry has a key, which is our unique widget id, and a value, consisting of a Widget object. Now, our Web applications can access product data from our Hazelcast distributed Map "widgets", rather than relying on our product database.
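The preload could look something like this sketch, using the Hazelcast API of that era. The Widget class and the loadWidgetsFromDatabase() call are hypothetical stand-ins for whatever product model and DAO the application actually uses:

```java
// Start (or join) a Hazelcast cluster member; null config means defaults.
HazelcastInstance hz = Hazelcast.newHazelcastInstance(null);

// The distributed map "widgets" is shared by every node in the cluster.
Map<Long, Widget> widgets = hz.getMap("widgets");

// Before the maintenance window, copy product rows out of the database.
for (Widget w : loadWidgetsFromDatabase()) {   // hypothetical DAO call
    widgets.put(w.getId(), w);                 // key: widget id, value: Widget
}

// During maintenance, the Web tier reads from the map, not the database:
Widget widget = widgets.get(someWidgetId);
```

Because Hazelcast's IMap implements the standard java.util.Map (in fact ConcurrentMap) interface, the Web tier does not need any Hazelcast-specific code to read from it.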

When an order is created, Hazelcast's distributed Queue comes into play. The Web applications can submit an order and it will be passed into our "orders" distributed Queue. Here we can keep our orders ready and waiting until our database comes back online. We configure our Queue to be backed by a Map so our "orders" elements, consisting of Order objects, can be persisted in our intermediary storage before being processed when our target datastore comes off of maintenance mode.
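Hazelcast's IQueue implements the standard java.util.concurrent.BlockingQueue interface, so the service layer can be written entirely against that interface. Here is a minimal sketch of such a buffer; the Order class is a hypothetical placeholder, and a local LinkedBlockingQueue stands in for the distributed "orders" queue you would get from Hazelcast:

```java
import java.io.Serializable;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderBuffer {

    // A hypothetical Order; a real one would carry customer and line-item data.
    public static class Order implements Serializable {
        private final long id;
        public Order(long id) { this.id = id; }
        public long getId() { return id; }
    }

    private final BlockingQueue<Order> orders;

    // In production this queue would come from Hazelcast (its IQueue
    // implements BlockingQueue); a local queue stands in for it here.
    public OrderBuffer(BlockingQueue<Order> orders) {
        this.orders = orders;
    }

    // Called by the Web applications while the order database is offline.
    public boolean submit(Order order) {
        return orders.offer(order);
    }

    // Called by a drain job once the database comes back from maintenance;
    // returns null when the buffer is empty.
    public Order next() {
        return orders.poll();
    }

    public static void main(String[] args) {
        OrderBuffer buffer = new OrderBuffer(new LinkedBlockingQueue<Order>());
        buffer.submit(new Order(1001L));
        System.out.println(buffer.next().getId()); // prints 1001
    }
}
```

Swapping the stand-in for the real distributed queue is then a one-line change in whatever wiring creates the OrderBuffer.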

A big part of this solution is having a solid services or middleware layer that can take on the burden of working with our distributed Maps and Queues. This creates a good abstraction between our Web applications and our backend datastore and distributed data. While a service-oriented architecture is a common approach to abstracting a backend from Web applications, this solution does introduce some complexity into the services or middleware architecture.

Here is a high level diagram of the setup:

With a mature services layer, our call center and online applications can create orders, search through our widgets that exist in our "widgets" distributed Map and even verify and validate orders that are currently only in our distributed Queue "orders" or our Map that is backing our Queue. This type of solution does take some up front planning, but 24/7 business comes at a cost.

Tuesday, November 15, 2011

I created a simple Annotation, Report, the other day to handle some simple logging/reporting tasks. The intent is to create a simple way to annotate methods that should be logged or reported during invocation. Here is what the Annotation looks like:
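A minimal version of such an annotation might look like the following. The element name, platforms, and the enum constants are my assumptions for this sketch; the one non-negotiable detail is RUNTIME retention, since isAnnotationPresent(...) can only see annotations retained at runtime:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The systems the logging/reporting code can write to.
enum Platform { ORACLE, SQLSERVER }

// RUNTIME retention is required for isAnnotationPresent(...) to see it later.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Report {
    Platform[] platforms();
}
```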

I have an interface, Menu, which all Menu implementations will implement:

public interface Menu {
public void doSomething(String session);
}

Here is an example implementation of Menu, MyMenu, with a Report Annotation on the doSomething(...) method. Platform is an enum that indicates whether to log/report to Oracle, to SQL Server, or to both systems, as in the case below:
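A reconstruction of that implementation might look like this (the platforms element name is an assumption carried over from the annotation sketch):

```java
public class MyMenu implements Menu {

    @Report(platforms = { Platform.ORACLE, Platform.SQLSERVER })
    public void doSomething(String session) {
        // menu-specific business logic would go here
    }
}
```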

Here is a block of code that creates a MyMenu implementation of Menu and a Proxy instance of Menu, and gets the "doSomething" method and checks whether or not the Report Annotation is present on the Method, m:
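A sketch of that block, together with the MyLogger handler, might look like this. The try/catch blocks from the original are collapsed into a throws clause here for brevity:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// InvocationHandler that would do the logging/reporting work per invocation.
class MyLogger implements InvocationHandler {
    private final Object target;
    MyLogger(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        // m here is the Menu interface's method, so @Report is only visible
        // if it is declared on the interface, not on the implementation.
        if (m.isAnnotationPresent(Report.class)) {
            // log/report according to the annotation's platforms
        }
        return m.invoke(target, args);
    }
}

public class AnnotationCheck {
    public static void main(String[] args) throws Exception {
        Menu menu = new MyMenu();
        Menu menuProxy = (Menu) Proxy.newProxyInstance(
                Menu.class.getClassLoader(),
                new Class<?>[] { Menu.class },
                new MyLogger(menu));

        // First check: the proxy class implements Menu, and a proxy's methods
        // do not carry the implementation class's annotations.
        Method m = menuProxy.getClass().getMethod("doSomething", String.class);
        System.out.println(m.isAnnotationPresent(Report.class)); // false

        // Second check: MyMenu's own method does declare @Report.
        m = menu.getClass().getMethod("doSomething", String.class);
        System.out.println(m.isAnnotationPresent(Report.class)); // true
    }
}
```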

The check m.isAnnotationPresent(Report.class) evaluates to true in the second try/catch block with the normal instance of MyMenu, menu, but it evaluates to false in the first try/catch block with the proxy instance of MyMenu, menuProxy. The above code also includes a reference to a class, MyLogger, that implements InvocationHandler. This is where the evaluation of

if(m.isAnnotationPresent(Report.class))

is imperative because it completes my implementation goal. Inside the MyLogger class, I have implemented the invoke(...) method required by InvocationHandler, and that is where this check drives the logging/reporting work.

Just some strange things that I noticed when trying to do something cool with Java Annotations. Everything works if I put a Report Annotation on the Menu interface's doSomething(...) method declaration, but that solution does not allow me to have individual data related to each implementation's doSomething(...) method.

Sunday, August 21, 2011

I have been spending some time learning and working with Gradle. Gradle is a Groovy based build and dependency tool that supports declarative builds and build-by-convention. Gradle is an open source project and it is used by many companies and other open source Java and Groovy projects.

First, make sure you have Java 1.5+ installed and download Gradle from gradle.org. Create a GRADLE_HOME variable in your environment referencing where you have unpacked your download. Next, add GRADLE_HOME/bin to your PATH variable. To confirm Gradle is installed and working, enter gradle --version at a command prompt.

Once you know Gradle is installed properly, you can now setup your project. To put together a simple project, all you need is a /src/main/java directory with your packages and Java source and a build.gradle file. I highly recommend adding a /src/test/java directory with your test packages and Java unit testing source because everyone needs to write tests.

I have been using Gradle with a Java project that I have been working on. I mostly depend on Gradle to handle my dependency resolution, run my tests and package my project. So, what kind of work goes into using Gradle for my needs? It is quite simple: a build file, build.gradle, with a few lines of Groovy.
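A 2011-era build.gradle matching that description would look roughly like this (the artifact versions are assumptions; pick whichever your project needs):

```groovy
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'commons-logging:commons-logging:1.1.1'
    testCompile 'junit:junit:4.8.2'
}
```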

The above entries let Gradle know to use the Java plugin, add Maven Central to the repositories and add commons-logging as a dependency for compilation and JUnit as a dependency of the test compilation. Within the project directory enter gradle build at a command prompt. The source will be compiled, the tests will be run with reports and you will have a packaged Jar file. It is ridiculously simple to get started. The Gradle team has done a great job. Have fun writing code.

Sunday, July 31, 2011

For quite a while, I have been an advocate of using the Spring framework for most, if not all, of my recent Java projects. There are many advantages of using Spring for features such as dependency injection, transaction management, and integration (Web services, messaging) because Spring creates an abstraction from the underlying implementations allowing developers to focus less on the 'how-to' and more on the problem at hand.

When working with AVAYA Dialog Designer, an Eclipse based IDE for developing IVR applications, I have found that I rely heavily on shared Java libraries that have already been tested and proven in our enterprise. With an increasing number of these shared libraries being written with Spring, I found that I needed to bring the Spring context into the AVAYA runtime environment.

Because the AVAYA IVR environment is a Java Web environment, we can start by locating the web.xml file in the project under the WEB-INF directory. We add a ContextLoaderListener to the application's web.xml file and we are on our way.
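The web.xml additions would look something like this (the context file location is an assumption; adjust the path to wherever the file lives in your project):

```xml
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/ivr-application-context.xml</param-value>
</context-param>

<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>
```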

Next, we need to investigate how we are configuring our application context file to ensure that our application context loads our dependencies' beans into this application's context. In the xml above, I have set the contextConfigLocation parameter to point to the location of the IVR application's context file. In ivr-application-context.xml, I have configured a few beans that I have written in my Dialog Designer project.

...
<bean id="myBean" class="beans.MyBean"/>
...

If we have other application context files, we can import them in ivr-application-context.xml or add them to the contextConfigLocation.

One feature of the Dialog Designer environment is the capability of writing your own custom Java classes, like MyBean, and overriding some of the current application's classes represented by the graphical call flow nodes. Each node is backed by a Java source file. For example, we can override the requestBegin(SCESession mySession) method of the generated AVAYA framework classes and use the SCESession object to access the ServletContext in order to get access to Spring's WebApplicationContext.
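Such an override might look like the sketch below. SCESession and the generated node class come from the AVAYA framework, and the exact call to reach the ServletContext from the session is an assumption here; the Spring half, WebApplicationContextUtils, is the standard way to fetch the root WebApplicationContext that ContextLoaderListener stored in the ServletContext:

```java
public void requestBegin(SCESession mySession) {
    // Assumption: how the ServletContext is reached from the SCESession
    // depends on the AVAYA framework version.
    ServletContext servletContext = mySession.getServletContext();

    WebApplicationContext springContext =
            WebApplicationContextUtils.getWebApplicationContext(servletContext);

    MyBean myBean = (MyBean) springContext.getBean("myBean");
    // use myBean as part of this node's call flow logic
}
```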

Now we have access to an instance of MyBean from Spring's context. In the end, we can see how easy it is to bring Spring into an existing Java Web framework that really does not need to exist within the Spring container. The dependent libraries can be accessed and we can wire any other custom beans if needed. Of course, things would be much easier if the AVAYA framework integrated with Spring, but if you find yourself working with a similar constraint, I hope this post illustrates how easy Spring makes integrating with other technologies.

Thursday, September 02, 2010

I have been a big fan of Spring's PropertyPlaceholderConfigurer since 2006 when I could wire up a datasource bean, or any bean for that matter, with just some references to properties that I knew were going to be in place. A snippet from a Spring context file for example:
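The snippet would look something like this sketch; the .properties file name, the property keys and the commons-dbcp datasource class are illustrative assumptions:

```xml
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:datasource.properties"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${jdbc.driver}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>
```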

Now, I can provide PropertyPlaceholderConfigurer with one or more .properties file locations, or I can depend on the properties existing as part of the runtime, as when using JBoss Application Server's property service. Then one day, I ran into a bit of an issue: an application with a properties file containing datasource connection information for each development region, DEV, TEST and PROD, with the region as a prefix on each property.
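The properties file would look something like this (all hostnames and values below are hypothetical, shown only to illustrate the region prefix):

```properties
DEV.jdbc.url=jdbc:oracle:thin:@devhost:1521:devdb
DEV.jdbc.username=app_dev
TEST.jdbc.url=jdbc:oracle:thin:@testhost:1521:testdb
TEST.jdbc.username=app_test
PROD.jdbc.url=jdbc:oracle:thin:@prodhost:1521:proddb
PROD.jdbc.username=app_prod
```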

... and so on. If you are packaging .properties files into your archive (.war, .jar, .ear), this does help make your code a bit more portable. I usually configure properties outside of an archive, but we can't always have our way. So we wrote a special class that reads the properties file and the region from the system properties; the region variable, SDLC_REGION, is set in each development region as a VM argument.

-DSDLC_REGION=DEV

And that works great. We can leave our Spring context alone and everything works like we need it to. But I am always trying to reduce classes or utilities (.jar files) that are no longer needed in our applications. So, I took another look at Spring 2.5's PropertyPlaceholderConfigurer and, lo and behold, there is a better way to do things. Check it out. Here is my Spring context file now:
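A reconstruction along these lines would use a nested placeholder, so that ${SDLC_REGION} is resolved from the system properties first and the result selects the region-prefixed key (file name, property keys and the datasource class are illustrative assumptions):

```xml
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:datasource.properties"/>
    <property name="systemPropertiesModeName"
              value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="url" value="${${SDLC_REGION}.jdbc.url}"/>
    <property name="username" value="${${SDLC_REGION}.jdbc.username}"/>
</bean>
```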

Now, the VM argument, SDLC_REGION, exists in each environment and it can be a part of our PropertyPlaceholderConfigurer expression. We can now load the correct property for each development region from the packaged .properties file without depending on our utility class anymore. Really cool stuff and again, beautiful work from the people at SpringSource.

Thursday, June 10, 2010

When you decide to incorporate a distributed data grid as part of your application architecture, a product's scalability, reliability, cost and performance are key considerations that will help you make your decision. Another key consideration will be the accessibility of the data. One nice feature of Hazelcast that I have been working with lately is distributed queries. In simple terms, distributed queries provide an API and syntax that allow a developer to query for entries that exist in a Hazelcast distributed map. Let's look at a very simple example.

In the demo project (link at the bottom) I have one object, a test case and the Hazelcast 1.8.4 jar file as a project dependency. Below is the class that will be put into a distributed map, ReportData. Once we have a distributed map that is full of ReportData entries, we can use Hazelcast's distributed query to find our ReportData entries.
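A reconstruction of ReportData might look like the class below. The id, reportName and endDate attributes come from the discussion that follows; the Boolean active field and the constructor are my assumptions for this sketch:

```java
import java.io.Serializable;
import java.util.Date;

public class ReportData implements Serializable {

    private static final long serialVersionUID = 1L;

    private int id;
    private String reportName;
    private Boolean active;
    private Date endDate;

    public ReportData(int id, String reportName, Boolean active, Date endDate) {
        this.id = id;
        this.reportName = reportName;
        this.active = active;
        this.endDate = endDate;
    }

    public int getId() { return id; }
    public String getReportName() { return reportName; }
    public Boolean getActive() { return active; }
    public Date getEndDate() { return endDate; }
}
```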

Nothing too complex in the code above. It is just an object that implements Serializable and that contains a few different types (String, Boolean and Date) of attributes. This class will work nicely to help demonstrate Hazelcast's distributed query API and syntax. I omitted the getters and setters for brevity.

In the test code, I created ~50,000 ReportData objects using a for loop and put them into the "ReportData" distributed map. I used the index, 0..50,000, for the ReportData's id and the reportName is set to "Report " + index. I did a few other things, so we could have a few different dates represented in our map's entries. Check out the demo project for more detail.

Below, we have a case where we are building the predicate programmatically using the EntryObject to fetch all ReportData where the id is greater than 49900 and the endDate attribute of ReportData is between two dates, startDate and endDate. I included the code below to show how I am creating a few dates to use in the predicate that eventually gets passed into the map.values(predicate) method.
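The date setup can be sketched with plain Calendar arithmetic; the thirty-day window below is an arbitrary choice for illustration. The Hazelcast predicate itself is shown in comments, since it needs the Hazelcast jar and a populated map to run:

```java
import java.util.Calendar;
import java.util.Date;

public class QueryDates {
    public static void main(String[] args) {
        // Build a date window for the predicate: thirty days back to today.
        Calendar cal = Calendar.getInstance();
        Date endDate = cal.getTime();
        cal.add(Calendar.DATE, -30);
        Date startDate = cal.getTime();
        System.out.println(startDate.before(endDate)); // prints true

        // With the Hazelcast 1.8.x query API, the predicate would then be
        // built roughly like this (a sketch, not compiled here):
        //
        // EntryObject e = new PredicateBuilder().getEntryObject();
        // Predicate predicate = e.get("id").greaterThan(49900)
        //         .and(e.get("endDate").between(startDate, endDate));
        // Collection<ReportData> values = map.values(predicate);
    }
}
```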

Getting data from your Hazelcast distributed map using the distributed query API and query syntax is pretty straightforward. Most of these queries ran in about 500 milliseconds to 2 seconds in my IDE. The power and performance comes from the ability to query objects or map entries that are in memory rather than always relying on a round trip to your RDBMS. Distributed queries are an important feature that make Hazelcast a great tool that can help offset the workload of your RDBMS. With Hazelcast and a good knowledge of your enterprise data, you can implement a simple and effective solution that will easily scale to as many Hazelcast nodes as your hardware can support. The demo project can be downloaded here. For more information, check out Hazelcast's website or visit the project's home at Google Code.

Sunday, April 11, 2010

The other day, while working on a Java project, I realized that I could implement an application requirement quite quickly by extending a current domain object and adding an attribute. This is an enterprise wide, industry standard domain object model so I couldn't just add the attribute to the domain object without cutting through some red tape. Plus, it was an attribute that I needed for my application and it would most likely have no use in other projects. So, I had something like this:
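The arrangement can be sketched like this. The real Policy is an industry-standard domain object with many attributes; a single hypothetical policyNumber field stands in for them here:

```java
// Stand-in for the enterprise-wide domain object (cannot be modified freely).
class Policy {
    private String policyNumber;
    public String getPolicyNumber() { return policyNumber; }
    public void setPolicyNumber(String policyNumber) { this.policyNumber = policyNumber; }
}

// Application-local extension carrying the one extra attribute we need.
class MyPolicy extends Policy {
    private String myNewProperty;

    public MyPolicy(Policy policy) {
        // copy the attributes we need from the service-returned Policy
        setPolicyNumber(policy.getPolicyNumber());
    }

    public String getMyNewProperty() { return myNewProperty; }
    public void setMyNewProperty(String myNewProperty) { this.myNewProperty = myNewProperty; }
}
```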

Now, I could fulfill the requirement with ease because I had a Policy object with the extra attribute I needed, myNewProperty, all in the MyPolicy object. I could handle the Policy returned from the Web service call, pass it into the MyPolicy constructor to create an instance of MyPolicy, derive the value of myNewProperty and then send it on to a view, for example. Nice, I think that will work for my application.

Later, I thought about how nice it would have been if the Policy object was implemented with Groovy. Then I could take advantage of Groovy's metaprogramming features like propertyMissing. When I have propertyMissing in my language arsenal, I can create the Policy object like so:
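A Groovy Policy along those lines might look like this sketch (the backing map and the real attribute are assumptions; the two propertyMissing signatures are Groovy's standard hooks):

```groovy
class Policy {
    String policyNumber          // a hypothetical real attribute
    def dynamicProps = [:]

    // called on a set of a property that does not exist on Policy
    def propertyMissing(String name, value) { dynamicProps[name] = value }

    // called on a get of a property that does not exist on Policy
    def propertyMissing(String name) { dynamicProps[name] }
}
```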

In the implementation above, the propertyMissing(String name, value) method is called when trying to set a value, myNewProperty, that doesn't exist in the Policy object. The propertyMissing(String name) method is called when trying to get a property, myNewProperty, that doesn't exist in the Policy object. By default, the propertyMissing(String name) will return null if the value was never initialized or never dynamically created. Yeah, it is an incredible feature. What is really nice is that I didn't have to create the MyPolicy object at all! In the Groovy world I could write the following:
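The usage would be as direct as this sketch, with no subclass in sight:

```groovy
def policy = new Policy()
policy.myNewProperty = 'some derived value'   // routed to propertyMissing(name, value)
assert policy.myNewProperty == 'some derived value'
assert policy.someOtherProperty == null       // never set, so null by default
```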

This would save me some time and now because the Policy object is expandable, other applications that need to extend the Policy object, for application purposes, will be able to utilize these features. I am convinced, Groovy IS productivity for Java.

Monday, May 11, 2009

The other day I wanted to prove the power of Groovy to a few more core Java developers. I sat down and played with a little script that I think proves the power, or ease of use, of Groovy while having fun with number theory. My goal was to have a Collection that contains Prime Numbers calculated from a given range of numbers x to y, or in Groovy syntax x..y. I am taking a few things for granted with this script. For example, I know that 2 is the lowest Prime Number (it helps simplify the algorithm) and that 1 is not a Prime Number. So here's the script:
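A reconstruction of the script along those lines might read as follows; the range 1..50 is an arbitrary choice for the example:

```groovy
def t = 1..50      // the range x..y
def v = []         // will collect the non-primes

t.each { n ->
    for (d in 2..<n) {
        if (n % d == 0) { v << n; break }   // n has a divisor, so not prime
    }
}

println t - v - 1  // subtract the non-primes, and the number 1, from the range
```

For 1..50 this should print the primes up to 50: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43 and 47.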

We have our range of numbers, t, and we capture the non-prime numbers by adding them to our collection, v. The final line of the script does a lot of groovy work for us that would take a few more lines of code in plain Java syntax. What is it doing for us? It takes our range of numbers t and subtracts (removes) our non-prime numbers collected in v. This line also removes the number 1, because we know that 1 is not a prime number. Finally, it prints the resulting collection of prime numbers.

Very cool stuff. Now, I know that many of us are rarely building a Collection of Prime Numbers and I know this script does not do things in a timely manner with a range like 1..10000 (it took a couple of minutes), but I am sure this feature of Collections in Groovy can be utilized in many ways in my development.

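A client built with CXF's simple frontend might look like this sketch; the Echo interface and its echo(String) method are assumptions carried over from the echo service post below:

```java
ClientProxyFactoryBean factory = new ClientProxyFactoryBean();
factory.setServiceClass(Echo.class);
factory.setAddress("http://localhost:9000/echo");
Echo client = (Echo) factory.create();

for (int i = 1; i <= 5; i++) {
    String message = "test " + i;
    System.out.println("Sending: " + message);
    System.out.println("Returned: " + client.echo(message));
}
```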
All we need to do is run our main method in our EchoClient class and we will get something like:

...
Sending: test 1
Returned: test 1
...

That's all folks. Obviously, this example, along with our previously posted service, is not solving the complex problems we might face in our current Java development endeavors, but it provides a peek into Apache CXF and opens up more people to a technology that I enjoy working with.

Tuesday, September 02, 2008

Web Service development has come a long way. I had some experience with XFire a few years ago and thought Web Service development could not get any easier. Then XFire became the Apache CXF project. I wanted to take a peek at Apache CXF, so the other night I put together a simple echo Web Service. Alright, if you would like to follow along, you can create a Web Service in about 10 minutes:
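The service can be sketched like this with CXF's simple frontend; the Echo and EchoImpl names are assumptions, while the address matches the WSDL URL mentioned below:

```java
// The service interface.
interface Echo {
    String echo(String message);
}

// The implementation class: echo the message straight back.
class EchoImpl implements Echo {
    public String echo(String message) {
        return message;
    }
}

public class EchoServer {
    public static void main(String[] args) {
        ServerFactoryBean factory = new ServerFactoryBean();
        factory.setServiceClass(Echo.class);
        factory.setAddress("http://localhost:9000/echo");
        factory.setServiceBean(new EchoImpl());
        factory.create();   // publishes the endpoint and generates the WSDL
    }
}
```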

Above, we set a few properties of the ServerFactoryBean that we have instantiated. We basically set up our service interface, our address (URL) to our service endpoint and the implementation class. We can run this as a normal Java application and browse over to http://localhost:9000/echo?wsdl and see our WSDL generated for our echo service.

There is nothing too complicated about the echo service. It is basically a Java interface, an implementation class and a few annotations. This is a great start for any developer who wants to start developing Web Services without getting overwhelmed with the full capabilities and features of Apache CXF or the spec. For more information, head on over to http://cxf.apache.org.