J2EE Related Items on Java.nethttp://www.java.net/taxonomy/term/55/all?type=All
enAutomating Deployment of the Summit ADF Sample Application to the Oracle Java Cloud Servicehttp://www.java.net/blog/bleonard/archive/2015/06/04/automating-deployment-summit-adf-sample-application-oracle-java-cloud-service
<!-- | 0 --><p><a href="https://wbrianleonard.wordpress.com/2015/06/04/automating-deployment-of-the-summit-adf-sample-application-to-the-oracle-java-cloud-service/" rel="bookmark">Automating Deployment of the Summit ADF Sample Application to the Oracle Java Cloud&nbsp;Service</a></p>
http://www.java.net/blog/bleonard/archive/2015/06/04/automating-deployment-summit-adf-sample-application-oracle-java-cloud-service#commentsBlogsDeploymentIDEJ2EEJava EnterpriseThu, 04 Jun 2015 12:25:17 +0000bleonard932006 at http://www.java.netRMI Unplugedhttp://www.java.net/blog/kcpeppe/archive/2015/01/27/rmi-unpluged
<p>I've just finished tuning a client's application where one of the items on the table was to find the source of calls to System.gc(). Using <a href="http://www.jclarity.com/censum">Censum</a> made easy work of understanding the source of the calls. The team I was working with not only missed that these calls to System.gc() were creating havoc with their end users' experience, they didn't even realize that something, somewhere, was messing things up!</p>
<p>That's not to say that developers are completely oblivious to the issue. Developers have gotten better about not placing calls to System.gc() in their code, but they often still don't see the call being made in their GC logs. One of the confusing causes of unwanted calls to <code class="prettyprint">System.gc()</code> comes from Java (typically Java EE) applications that have remote objects, and therefore use <a href="http://www.oracle.com/technetwork/java/javase/tech/index-jsp-136424.html">RMI</a>. In the early days these calls came into applications at the rate of once a minute, and as you can imagine, that caused a lot of consternation for those analyzing the performance of Java EE applications making use of Session or Entity beans. After much discussion, the call rate was reduced to the current rate of once an hour. Better, but it can still be as disruptive as it can be useful. Let's expose a little bit of how the JVM works by digging into how RMI plays with the garbage collectors.</p>
<p>The Remote Procedure Call (RPC) protocol (originally written in C) allowed one process to call an exposed function in another process. The function in the other process was expected to manage all of its resources on its own. For example, if it allocated memory, it was responsible for freeing it. The only piece of management that relied on the “client” doing the right thing was shutting down the connection. Even then, a broken socket would cause the RPC library to close down the connection on its own.</p>
<p>Move forward and we can see that the RPC style of remoting is re-implemented in Java with RMI and CORBA. Together, these protocols expose classes that you can instantiate and interact with in a remote JVM. Without getting too deeply into how all this works, one can quickly see that this leaves us with a problem. In Java, all objects that are unreachable will be garbage collected. For a remote JVM to access an object in our heap, we must hold a pointer to that object on the remote JVM's behalf. This implies that, to clean up, the remote JVM has to somehow signal when it's finished with the remote object. Given the nature of the resources needed to make all this work, the implementors of RMI decided that they should force a collection of tenured space on a regular basis. In practical terms this means RMI makes regular calls to System.gc().</p>
<p>If you look deep inside RMI you’ll run across the class <code class="prettyprint">ObjectTable</code>. That class is responsible for retaining the roots of all objects that support a remote call. All of the objects added to this table are wrapped in a <code class="prettyprint">WeakReference</code>. Furthermore, <code class="prettyprint">ObjectTable</code> manages its own <code class="prettyprint">ReferenceQueue</code> using an internal <code class="prettyprint">Reaper</code> (which implements <code class="prettyprint">Runnable</code>). Additionally, the connection classes from java.io all rely on finalization to clean up resources such as file handles. In addition to ObjectTable holding onto the roots for the Distributed Garbage Collector (DGC - which cleans up for RMI), all newly disconnected objects must be identified by a regular garbage collection cycle, cleaned out of their respective <code class="prettyprint">ReferenceQueue</code>, and then reclaimed with a subsequent collection.</p>
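<p>A minimal sketch (not RMI's actual code) of the WeakReference/ReferenceQueue pattern that ObjectTable relies on: a reference is only enqueued after a collection has discovered the referent is unreachable, which is why a GC cycle is a prerequisite for the DGC's cleanup.</p>

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object exported = new Object();                      // stands in for a remote object
        WeakReference<Object> ref = new WeakReference<>(exported, queue);

        exported = null;   // drop the last strong reference
        System.gc();       // a collection must run before the reference is enqueued

        // remove() blocks until the cleared reference shows up (here, up to 2s)
        Reference<?> cleared = queue.remove(2000);
        System.out.println(cleared == ref);
    }
}
```

<p>On HotSpot this typically prints <code class="prettyprint">true</code>; strictly speaking, System.gc() is only a hint, so the timeout guards the demo.</p>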
<p>To support all of this, you can find a reference to the class <code class="prettyprint">GC.LatencyRequest</code>. The <code class="prettyprint">sun.misc.GC</code> class is a singleton whose primary function is to make sure that a collection of tenured space has run within a fixed period of time (otherwise known as the gcLatency). The class is supported by the native method <code class="prettyprint">maxObjectInspectionAge()</code>, which returns an estimate of the elapsed time since the last GC. In the case of the Serial, Parallel, CMS and G1 collectors, it returns the elapsed time since tenured space was collected. If that elapsed time exceeds the latency target, then a <code class="prettyprint">System.gc()</code> is called. As long as there are objects in <code class="prettyprint">ObjectTable</code>, the <code class="prettyprint">GC::run()</code> method (listing below) will remain active.</p>
<pre class="prettyprint"><code>public void run() {
    for (;;) {
        long l;
        synchronized (lock) {
            l = latencyTarget;
            if (l == NO_TARGET) {
                /* No latency target, so exit */
                GC.daemon = null;
                return;
            }

            long d = maxObjectInspectionAge();
            if (d >= l) {
                /* Do a full collection.  There is a remote possibility
                 * that a full collection will occur between the time
                 * we sample the inspection age and the time the GC
                 * actually starts, but this is sufficiently unlikely
                 * that it doesn't seem worth the more expensive JVM
                 * interface that would be required.
                 */
                System.gc();
                d = 0;
            }

            /* Wait for the latency period to expire,
             * or for notification that the period has changed
             */
            try {
                lock.wait(l - d);
            } catch (InterruptedException x) {
                continue;
            }
        }
    }
}</code></pre>
<p>The default value for latencyTarget is 3600000ms (one hour). Oddly enough, you can override this by setting your own latency target in code, as follows.</p>
<pre class="prettyprint"><code>// requires: import sun.misc.GC; import sun.misc.GC.LatencyRequest;
public static void main(String[] args) throws Throwable {
    System.out.println("latency target: " + GC.currentLatencyTarget());
    System.out.println("turn on GC");
    LatencyRequest request = GC.requestLatency(1000);
    System.out.println("latency target: " + GC.currentLatencyTarget());
    Thread.sleep(6000);
    System.out.println("turn off GC");
    request.cancel();
    Thread.sleep(6000);
}</code></pre>
<p>If you run this code with the flags -verbose:gc and -XX:+PrintGCDetails, you will see a call to System.gc() made once a second for 6 seconds. The output you should see will look something like:</p>
<pre class="prettyprint"><code>latency target: 0<br />turn on GC<br />latency target: 1000<br />[GC (System.gc()) 5243K->792K(251392K), 0.0010124 secs]<br />[Full GC (System.gc()) 792K->582K(251392K), 0.0045390 secs]<br />[GC (System.gc()) 3203K->678K(251392K), 0.0006147 secs]<br />[Full GC (System.gc()) 678K->571K(251392K), 0.0108860 secs]<br />[GC (System.gc()) 1882K->571K(251392K), 0.0006402 secs]<br />[Full GC (System.gc()) 571K->568K(251392K), 0.0060488 secs]<br />[GC (System.gc()) 568K->568K(251392K), 0.0003195 secs]<br />[Full GC (System.gc()) 568K->568K(251392K), 0.0018367 secs]<br />[GC (System.gc()) 568K->568K(251392K), 0.0005905 secs]<br />[Full GC (System.gc()) 568K->568K(251392K), 0.0028363 secs]<br />[GC (System.gc()) 568K->568K(251392K), 0.0003654 secs]<br />[Full GC (System.gc()) 568K->568K(251392K), 0.0018081 secs]<br />turn off GC</code></pre>
<p>If <code class="prettyprint">ObjectTable</code> becomes empty, it will reset the GC latency to 0, which will overwrite your settings and turn off the GC::run method. If another RMI request is made, the target latency will be set back to the preset value for RMI. This implies that any settings we make in code are subject to change at any time. A safer way to set the GC latency for RMI is to set the system properties sun.rmi.dgc.client.gcInterval and sun.rmi.dgc.server.gcInterval.</p>
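<p>For example (a hypothetical launcher, not from the post): the properties are read by the DGC, so they should be in place before the first remote object is exported - either on the command line with -D, or very early in main:</p>

```java
// Hypothetical example: raising the DGC-driven GC interval. Equivalent to
// -Dsun.rmi.dgc.client.gcInterval=... -Dsun.rmi.dgc.server.gcInterval=...
// on the command line.
public class RmiGcInterval {
    public static void main(String[] args) {
        String interval = String.valueOf(Long.MAX_VALUE);   // effectively "never"
        System.setProperty("sun.rmi.dgc.client.gcInterval", interval);
        System.setProperty("sun.rmi.dgc.server.gcInterval", interval);
        System.out.println(System.getProperty("sun.rmi.dgc.server.gcInterval"));
    }
}
```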
<p>Generally I don’t find these extraneous calls to <code class="prettyprint">System.gc()</code> to be useful, so I often tune the values up to as much as <code class="prettyprint">Long.MAX_VALUE</code> milliseconds. However, if you are using RMI and tenured space is never collected, it’s likely that you want to schedule a full collection simply to recycle things like file descriptors. File descriptors in particular are special, as they are processed with finalization, which means a full GC must run in order for these objects to be inserted into the finalization <code class="prettyprint">ReferenceQueue</code>.</p>
<p>Final note: there are other sources of JDK-internal calls to <code class="prettyprint">System.gc()</code>. These include:<br />
* Use of <code class="prettyprint">FileChannel.map</code><br />
* Use of <code class="prettyprint">ByteBuffer.allocateDirect()</code><br />
The latter will make the call if reference processing isn’t keeping up and you are hitting MaxDirectMemorySize.</p>
<p>Yet another final note: yes, I know the Java API spec says that a JVM can ignore calls to <code class="prettyprint">System.gc()</code> if it so chooses. The reality is, in OpenJDK all calls to <code class="prettyprint">System.gc()</code> are honoured unless -XX:+DisableExplicitGC is set on the command line. If it isn't, the calls are synchronized, resulting in all of the calls being stacked up, running one right after the other. I have an exercise in my <a href="http://www.kodewerk.com/workshop.html">performance tuning workshop</a> that does just this and, believe me, it messes things up very badly. People always give me odd looks when they finally sort out what is happening. But then, if they were to use <a href="http://www.jclarity.com/censum/">Censum</a>, they'd see the problem in seconds!</p>
<p>In a note from Charlie Hunt, he kindly pointed out that I forgot to mention that -XX:+ExplicitGCInvokesConcurrent is an option for CMS and G1 to avoid the single-threaded full GC that is typically triggered when System.gc() is called. Thanks for reminding me, Charlie.</p>
http://www.java.net/blog/kcpeppe/archive/2015/01/27/rmi-unpluged#commentsBlogsDistributedEJBJ2EEJ2SEOpen JDKPerformanceTue, 27 Jan 2015 16:44:05 +0000kcpeppe931533 at http://www.java.netWhat if your logs were in JSON?http://www.java.net/blog/timboudreau/archive/2015/01/18/what-if-your-logs-were-json
<!-- | 0 --><p>Bunyan is a NodeJS library that rethinks logging in some really useful ways. I wrote a <a href="http://j.mp/1AOGgUe">Java port you can use in your applications</a>.</p>
<p>In particular, with Bunyan, logs are JSON - and Bunyan comes with a great filtering and analysis tool.</p>
<p>The Java port uses some innovative techniques to make logging simple and foolproof - in particular, a use of <code class="prettyprint">AutoCloseable</code> that keeps logging code concise.</p>
<p>A walkthrough <a href="http://j.mp/1AOGgUe">on my real blog on timboudreau.com</a>.</p>
http://www.java.net/blog/timboudreau/archive/2015/01/18/what-if-your-logs-were-json#commentsBlogsDeploymentDistributedGlassFishJ2EEJ2SEJava EnterpriseJava Web Services and XMLNetBeansOpen SourceWeb Development ToolsMon, 19 Jan 2015 02:43:19 +0000timboudreau931458 at http://www.java.netValidating Oracle Java Cloud Service HAhttp://www.java.net/blog/bleonard/archive/2015/01/07/validating-oracle-java-cloud-service-ha
<img src="/images/people/brian_leonard.jpg" border="0" align="left" /><h1>Validating Oracle Java Cloud Service HA</h1>
<p>One of my favorite applications from my Sun Java System Application Server days was the <a href="http://docs.oracle.com/cd/E19644-01/817-5444/gsghajsp.html" target="_blank">Cluster JSP Sample Application</a>. In a cluster configuration fronted by a load balancer, this simple JSP provides a nice summary of which cluster node handled the request as well as the ability to test session failover. Therefore, why not try it on the <a href="https://cloud.oracle.com/java" target="_blank">Oracle Java Cloud Service</a> (JCS):</p>
<p><img src="http://weblogs.java.net/sites/default/files/ClusterHAJSPSample.JPG" alt="Cluster - HA JSP Sample" /></p>
<p>I have a JCS instance with 2 managed server nodes, Alpha01J_server_1 and Alpha01J_server_2:</p>
<p><img src="http://weblogs.java.net/sites/default/files/InstanceAlpha01JCS.JPG" alt="Instance Alpha01JCS" /></p>
<p>My <a href="https://docs.oracle.com/cloud/latest/jcs_gs/JSCUG/GUID-31F00F2C-221F-4069-8E8A-EE48BFEC53A2.htm#JSCUG-GUID-82213C7B-BD8C-4B07-9117-17631FE25399" target="_blank">Load Balancer Policy</a> is set to Round Robin. If I start a new session (by using a different browser), the load balancer will direct me to managed server Alpha01J_server_2 hosted on alpha01jcs-wls-2:</p>
<p><img src="http://weblogs.java.net/sites/default/files/ClusterHAJSPSampleIEMarked.JPG" /></p>
<p>So I have easily confirmed that the load balancer is working as expected. Let's add some data to the session hosted on Alpha01J_server_1 hosted on alpha01jcs-wls-1:</p>
<p><img src="http://www.java.net/sites/default/files/ClusterHAJSPSampleWithSessionMarked.jpg" alt="Cluster - HA JSP Sample with session data m" /></p>
<p>Now let's simulate a failure condition. To do that, I will SSH into alpha01jcs-wls-1 and kill the Alpha01J_server_1 WebLogic server process:</p>
<p><img src="http://weblogs.java.net/sites/default/files/SSHKill.JPG" alt="SSH Kill Alpha01JCS" /></p>
<p>When I then return to my application and reload the page, my execution server has failed over to alpha01jcs-wls-2 and my session data remains intact:</p>
<p><img src="http://weblogs.java.net/sites/default/files/ClusterHAJSPSampleWithSessionFailoverMarked_0.JPG" alt="Cluster - HA JSP Sample with Session Failover" /></p>
<p>Through the WebLogic Administration Console, I can verify all sessions have failed over to Alpha01J_server_2. </p>
<p><img src="http://weblogs.java.net/sites/default/files/ServerFailoverStatus.JPG" alt="Server Failover Status" /></p>
<p>You'll also notice that the node manager has already restarted my failed Alpha01J_server_1 server. </p>
<p>One other point to note, the application needs to inform WebLogic that its session should be replicated. This is done via the <a href="http://docs.oracle.com/middleware/1212/wls/WBAPP/weblogic_xml.htm#WBAPP587" target="_blank">session-descriptor</a> in the WebLogic deployment descriptor, so I added the following weblogic.xml to the sample application:</p>
<code class="prettyprint">&lt;?xml version = '1.0' encoding = 'windows-1252'?&gt;<br />&lt;weblogic-web-app xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;<br /> xsi:schemaLocation=&quot;http://xmlns.oracle.com/weblogic/weblogic-web-app <a href="http://xmlns.oracle.com/weblogic/weblogic-web-app/1.5/weblogic-web-app.xsd&quot;<br" title="http://xmlns.oracle.com/weblogic/weblogic-web-app/1.5/weblogic-web-app.xsd&quot;<br">http://xmlns.oracle.com/weblogic/weblogic-web-app/1.5/weblogic-web-app.x...</a> /> xmlns=&quot;http://xmlns.oracle.com/weblogic/weblogic-web-app&quot;&gt;<br /> &lt;session-descriptor&gt;<br /> &lt;persistent-store-type&gt;REPLICATED_IF_CLUSTERED&lt;/persistent-store-type&gt;<br /> &lt;/session-descriptor&gt;<br />&lt;/weblogic-web-app&gt;</code></br></p>
<p>Finally, if you wish to experiment with the application yourself, I have uploaded it here: <a href="/sites/all/modules/pubdlcnt/pubdlcnt.php?file=http://weblogs.java.net/sites/default/files/clusterjsp.zip&nid=931379" target="_blank">clusterjsp.zip</a>.</p>
<p>Enjoy</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="/sites/all/modules/pubdlcnt/pubdlcnt.php?file=http://www.java.net/sites/default/files/clusterjsp.zip&nid=931379">clusterjsp.zip</a></td><td>2.42 KB</td> </tr>
</tbody>
</table>
http://www.java.net/blog/bleonard/archive/2015/01/07/validating-oracle-java-cloud-service-ha#commentsBlogsJ2EEJava EnterpriseJSPWed, 07 Jan 2015 20:39:35 +0000bleonard931379 at http://www.java.netAuth ID Overload with domain .id (Indonesia) and Meruvian Yama OAuth2 Serverhttp://www.java.net/blog/fthamura/archive/2014/11/22/auth-id-overload-domain-id-indonesia-and-meruvian-yama-oauth2-server
<img src="/images/people/frans_thamura.jpg" border="0" align="left" /><p>Indonesia has released the .id domain to the public, and more and more websites are using it. The domain is costly, at around $50/year. </p>
<p>Looked at another way, this domain can become an identity portal. </p>
<p>And yes, we are among the ones using it (<a href="http://www.merv.id" title="http://www.merv.id">http://www.merv.id</a>), and we have also released an OAuth server. Take a look at <a href="https://github.com/meruvian/yama" title="https://github.com/meruvian/yama">https://github.com/meruvian/yama</a>, a 2-in-1 project that can serve as both an MVC platform and an OAuth server. It is compatible with the JENI Education Program (<a href="http://www.jeni.or.id" title="http://www.jeni.or.id">http://www.jeni.or.id</a>).</p>
<p>So now, starting with vocational high schools in Indonesia, students can learn how to create Java EE applications (we will move from Struts2/REST to pure JAX-RS). The security we adopt from Spring and extend with several features. </p>
<p>The live version of Meruvian Yama now runs at <a href="http://www.merv.id" title="http://www.merv.id">http://www.merv.id</a> and <a href="http://www.cybers.id" title="http://www.cybers.id">http://www.cybers.id</a>, and we hope more people will use it and contribute to make it better.</p>
<p><img target="blank" src="http://weblogs.java.net/sites/default/files/Screenshot_from_2014-11-23_135752.png" width="60%" height="60%"></p>
<p><img target="blank" src="http://weblogs.java.net/sites/default/files/mervid_0.png" width="60%" height="60%"></p>
<p>FYI: The difference between MervID and CybersID lies in having a more complete profile and DISC Psychotest Profiling; both are separate projects under the PAJAJE project (still hosted on SF.net).</p>
<p>We also created a Yama showcase as a playground to "access" our Merv.ID, Facebook and G+; we adopted the AdminLTE Bootstrap UI.</p>
<p><img target="blank" src="http://weblogs.java.net/sites/default/files/yama.png" width="60%" height="60%"></p>
<p><img target="blank" src="http://weblogs.java.net/sites/default/files/yama-news.png" width="60%" height="60%"></p>
<p>For an Android client that consumes the Yama News showcase, called the MiDas Project showcase, you can download it from Google Play, or go directly to its URL: <a href="https://play.google.com/store/apps/details?id=org.meruvian.midas.showcase" title="https://play.google.com/store/apps/details?id=org.meruvian.midas.showcase">https://play.google.com/store/apps/details?id=org.meruvian.midas.showcase</a></p>
<p>The source code of the MiDas Project is at <a href="https://github.com/meruvian/midas-droid" title="https://github.com/meruvian/midas-droid">https://github.com/meruvian/midas-droid</a>.</p>
<p>So, if you want to create an authentication server, we are glad to help, and we welcome any feedback - just contact me at frans @ meruvian.com.</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://www.java.net/sites/default/files/Screenshot_from_2014-11-23_135752.png">Screenshot_from_2014-11-23_135752.png</a></td><td>1.11 MB</td> </tr>
<tr class="even"><td><a href="http://www.java.net/sites/default/files/yama.png">yama.png</a></td><td>51.68 KB</td> </tr>
<tr class="odd"><td><a href="http://www.java.net/sites/default/files/yama-api.png">yama-api.png</a></td><td>82.59 KB</td> </tr>
<tr class="even"><td><a href="http://www.java.net/sites/default/files/yama-news.png">yama-news.png</a></td><td>62.23 KB</td> </tr>
<tr class="odd"><td><a href="http://www.java.net/sites/default/files/mervid_0.png">mervid.png</a></td><td>76.95 KB</td> </tr>
</tbody>
</table>
http://www.java.net/blog/fthamura/archive/2014/11/22/auth-id-overload-domain-id-indonesia-and-meruvian-yama-oauth2-server#commentsAdopt a JSRBloggingBlogsBusinessCommunityEducationGlobal Education and LearningIdentity ManagementJ2EEOpen SourceWeb ApplicationsSun, 23 Nov 2014 07:19:12 +0000fthamura931123 at http://www.java.netStateless Session for multi-tenant application using Spring Securityhttp://www.java.net/blog/sgdev-blog/archive/2014/09/07/stateless-session-multi-tenant-application-using-spring-security
<div class="field field-type-filefield field-field-thumb-100x70">
<div class="field-items">
<div class="field-item odd">
<img class="imagefield imagefield-field_thumb_100x70" width="551" height="549" alt="" src="http://www.java.net/sites/default/files/sgdev-blog/stateless_session.png?1410119703" /> </div>
</div>
</div>
<script src="https://google-code-prettify.googlecode.com/svn/loader/run_prettify.js"></script><p><a href="http://1.bp.blogspot.com/-K7eFNUwqLaE/VAAggA3nbdI/AAAAAAAAB8s/Fha1VrMG4eQ/s1600/stateless_session.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://1.bp.blogspot.com/-K7eFNUwqLaE/VAAggA3nbdI/AAAAAAAAB8s/Fha1VrMG4eQ/s1600/stateless_session.png" height="316" width="320" /></a>Once upon a time, <a href="http://sgdev-blog.blogspot.sg/2014/02/stateful-and-stateless-application.html">I published one article explaining the principle to build Stateless Session</a>. Coincidentally, we are working on the same task again, but this time, for a multi-tenant application. This time, instead of building the authentication mechanism ourselves, we integrate our solution into Spring Security framework.</p>
<p>This article will explain our approach and implementation.</p>
<p><b><span style="font-size: x-large;">Business Requirement</span></b></p>
<p>We need to build an authentication mechanism for a SaaS application. Each customer accesses the application through a dedicated sub-domain. Because the application will be deployed on the cloud, it is pretty obvious that Stateless Session is the preferred choice, because it allows us to deploy additional instances without hassle.</p>
<p>In the project glossary, each customer is one site. Each application is one app. For example, a site may be Microsoft or Google. An app may be Gmail, GooglePlus or Google Drive. The sub-domain that a user uses to access the application will include both app and site. For example, it may look like <i>microsoft.mail.somedomain.com</i> or <i>google.map.somedomain.com</i>.</p>
<p>Once a user logs in to one app, they can access any other app as long as it belongs to the same site. The session will time out after a certain inactive period.</p>
<p><b><span style="font-size: x-large;">Background</span></b></p>
<p><b><span style="font-size: large;">Stateless Session</span></b></p>
<p>A stateless application with timeout is nothing new. The Play framework has been stateless since its first release in 2007. We also switched to Stateless Session many years ago. The benefit is pretty clear: your load balancer does not need stickiness and hence is easier to configure. As the session is on the browser, we can simply bring in new servers to boost capacity immediately. However, the disadvantage is that your session is not so big and not so confidential anymore.</p>
<p>Compared to a stateful application, where the session is stored on the server, a stateless application stores the session in an HTTP cookie, which cannot grow beyond 4KB. Moreover, as it is a cookie, it is recommended that developers only store text or digits in the session rather than complicated data structures. The session is stored in the browser and transferred to the server in every single request. Therefore, we should keep the session as small as possible and avoid placing any confidential data in it. To put it short, stateless session forces developers to change the way the application uses the session. It should be a user identity rather than a convenient store.</p>
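<p>As a sketch of what "user identity rather than convenient store" can look like (our assumptions, not the post's actual code): the cookie carries only the identity plus a timestamp, HMAC-signed with a shared secret, so the server can trust it without any server-side state. The key and payload fields here are made up for illustration.</p>

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SessionCookie {
    // Shared secret: in a real deployment this comes from configuration,
    // and is the same across all servers and applications.
    private static final byte[] KEY = "change-me-shared-secret".getBytes(StandardCharsets.UTF_8);

    // Append an HMAC-SHA256 signature to the payload.
    static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        String sig = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        return payload + "|" + sig;
    }

    // Re-sign the payload and compare: any tampering invalidates the cookie.
    static boolean verify(String cookie) throws Exception {
        int i = cookie.lastIndexOf('|');
        if (i < 0) return false;
        return sign(cookie.substring(0, i)).equals(cookie);
    }

    public static void main(String[] args) throws Exception {
        String cookie = sign("user=joe;site=xerox;ts=1410119703");
        System.out.println(verify(cookie));               // true
        System.out.println(verify(cookie + "tampered"));  // false
    }
}
```

<p>Note that a production version would also use a constant-time comparison; this sketch only shows the shape of the mechanism.</p>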
<p><b><span style="font-size: large;">Security Framework</span></b></p>
<p>The idea behind a security framework is pretty simple: it helps to identify the principal executing the code, checks whether it has permission to execute certain services, and throws an exception if it does not. In terms of implementation, the security framework integrates with your services in an AOP-style architecture. Every check will be done by the framework before the method call. The mechanism for implementing the permission check may be a filter or a proxy.</p>
<p>Normally, a security framework will store the principal information in thread storage (a ThreadLocal in Java). That is why it can give developers static method access to the principal at any time. I think this is something developers should know well; otherwise, they may implement a permission check or retrieve the principal in background jobs that run in separate threads. In this situation, it is obvious that the security framework will not be able to find the principal.</p>
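<p>A small self-contained illustration (not framework code) of why that happens: each thread gets its own ThreadLocal slot, so a worker thread sees nothing of the request thread's principal.</p>

```java
public class PrincipalHolder {
    private static final ThreadLocal<String> PRINCIPAL = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        PRINCIPAL.set("joe");
        System.out.println(PRINCIPAL.get()); // joe - visible on the setting thread

        final String[] seen = new String[1];
        Thread worker = new Thread(() -> seen[0] = PRINCIPAL.get());
        worker.start();
        worker.join();
        System.out.println(seen[0]); // null - the worker thread has its own empty slot
    }
}
```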
<p><b><span style="font-size: large;">Single Sign On</span></b></p>
<p>Single Sign-On is mostly implemented using an authentication server. It is independent of the mechanism used to implement the session (stateless or stateful). Each application still maintains its own session. On the first access to an application, it will contact the authentication server to authenticate the user, then create its own session.</p>
<p><b><span style="font-size: x-large;">Food for Thought</span></b></p>
<p><b><span style="font-size: large;">Framework or build from scratch</span></b></p>
<p>As stateless session is the standard, the biggest concern for us is whether or not to use a security framework. If we do, then Spring Security is the cheapest and fastest solution, because we already use the Spring Framework in our application. As a benefit, any security framework provides us a quick and declarative way to define access rules. However, these will not be business-logic-aware access rules. For example, we can define that only an Agent can access products, but we cannot define that one agent can only access the products that belong to him.</p>
<p>In this situation, we had two choices: building our own business-logic permission check from scratch, or building two layers of permission check, one purely role-based and one business-logic aware. After comparing the two approaches, we chose the latter because it is cheaper and faster to build. Our application will function similarly to any other Spring Security application. It means that the user will be redirected to the login page when accessing protected content without a session. If the session exists, the user will get status code 403. If the user accesses protected content with a valid role but unauthorized records, he will get 401 instead.</p>
<p><b><span style="font-size: large;">Authentication</span></b></p>
<p>The next concern is how to integrate our authentication and authorization mechanism with Spring Security. A standard Spring Security application may process a request as below:</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-riMGTXAvG3U/VABBbEe1NRI/AAAAAAAAB9I/wPVcRpnjxnI/s1600/standard_spring_security.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-riMGTXAvG3U/VABBbEe1NRI/AAAAAAAAB9I/wPVcRpnjxnI/s1600/standard_spring_security.png" height="283" width="320" /></a></div>
<p>
The diagram is simplified but still gives us a rough idea of how things work. If the request is a login or logout, the top two filters update the server-side session. After that, another filter checks access permission for the request. If the permission check succeeds, yet another filter stores the user session in thread storage. After that, the controller executes code in a properly set-up environment.</p>
<p>For us, we preferred to create our own authentication mechanism, because the credentials need to contain the website domain. For example, we may have Joe from Xerox and Joe from WDS accessing the SaaS application. As Spring Security takes control of preparing the authentication token and authentication provider, we found it cheaper to implement login and logout ourselves at the controller level rather than spend effort customizing Spring Security.</p>
<p>As we implement stateless session, there are two pieces of work we need to implement here. First, we need to construct the session from the cookie before any authorization check. We also need to update the session timestamp so that the session is refreshed every time the browser sends a request to the server.</p>
<p>Because of the earlier decision to do authentication in the controller, we face a challenge here. We should not refresh the session before the controller executes, because we do authentication there. However, some controller methods are attached to a view resolver that writes to the output stream immediately; therefore, we have no chance to refresh the cookie after the controller has executed. Finally, we chose a slightly compromised solution using a HandlerInterceptorAdapter. This handler interceptor allows us to do extra processing before and after each controller method. We refresh the cookie after the controller method if the method is for authentication, and before the controller method for any other purpose. The new diagram looks like this:</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-RVMoBGzyaL8/VAP_lpoKJ1I/AAAAAAAACAI/hq0AjYggEoU/s1600/stateless_spring_security.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-RVMoBGzyaL8/VAP_lpoKJ1I/AAAAAAAACAI/hq0AjYggEoU/s1600/stateless_spring_security.png" height="314" width="320" /></a></div>
<p><b><span style="font-size: large;">Cookie</span></b></p>
<p>To be meaningful, a user should have only one session cookie. As the session changes its timestamp after each request, we need to update the session cookie in every single response. Per the HTTP protocol, this can only be done if the cookies match in name, path and domain.</p>
<p>When we got this business requirement, we preferred to try a new way of implementing SSO by sharing the session cookie. If all applications are under the same parent domain and understand the same session cookie, we effectively have a global session. Therefore, there is no need for an authentication server any more. To achieve that vision, we must set the cookie domain to the parent domain of all applications.</p>
<p>To illustrate this global session, let us come back to the earlier example where we have two applications at the domains <i>microsoft.mail.somedomain.com</i> and <i>google.map.somedomain.com</i>. For the session cookie to be global, we will set its domain to <i>somedomain.com</i>. Obviously, the session cookie can be seen and maintained by both applications as long as they share the same secret key for signing.</p>
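<p>A sketch of the resulting header (the cookie name and value are made up; the attribute names follow the standard HTTP cookie syntax): setting Domain to the parent domain is what lets every sub-domain application read and rewrite the same cookie.</p>

```java
public class GlobalCookie {
    // Build the Set-Cookie header by hand to show the attribute that makes the
    // session cookie global across sub-domains: Domain set to the parent domain.
    static String setCookieHeader(String name, String value) {
        return name + "=" + value
                + "; Domain=somedomain.com"  // shared by *.somedomain.com
                + "; Path=/"
                + "; HttpOnly";              // keep it away from page scripts
    }

    public static void main(String[] args) {
        System.out.println(setCookieHeader("SESSION", "joe.1410119703.signature"));
    }
}
```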
<p><b><span style="font-size: large;">Performance</span></b></p>
<p>Theoretically, a stateless session should be slower. Assuming the server stores the session table in memory, passing in the JSESSIONID cookie triggers only a single read of the session object from the table and an optional write to update the last access time (for calculating session timeout). In contrast, for a stateless session, we need to compute the hash to validate the session cookie, load the principal from the database, assign a new time stamp and hash again.</p>
<p>However, with today's server performance, hashing should not add much delay to the response time. The bigger concern is querying data from the database, and for this, we can speed things up with a cache.</p>
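As a sketch of such a cache, principals can be kept in a ConcurrentHashMap and loaded from the database only on a miss. The loadFromDatabase method below is a hypothetical placeholder for the real lookup:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PrincipalCache {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();

    // Return the principal from cache; fall back to the database only on a miss.
    public String getPrincipal(Integer userId) {
        return cache.computeIfAbsent(userId, this::loadFromDatabase);
    }

    // Placeholder for the real DB lookup (assumption: principals are keyed by user id).
    private String loadFromDatabase(Integer userId) {
        return "user-" + userId;
    }

    public static void main(String[] args) {
        PrincipalCache c = new PrincipalCache();
        System.out.println(c.getPrincipal(42));
    }
}
```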
<p>In the best-case scenario, a stateless session can perform close to a stateful one if no DB call is made. Instead of loading from the session table, which is maintained by the container, the session is loaded from an internal cache, which is maintained by the application. In the worst-case scenario, requests are routed to many different servers and the principal object is stored in many instances. This adds the effort of loading the principal into the cache once per server. While that cost may be high, it occurs only once in a while.</p>
<p>If we apply sticky routing at the load balancer, we should achieve best-case performance. With this, we can view the stateless session cookie as a mechanism similar to JSESSIONID, but with the fallback ability to reconstruct the session object.</p>
<p><b><span style="font-size: x-large;">Implementation</span></b></p>
<p>I have published a sample of this implementation to the https://github.com/tuanngda/sgdev-blog repository. Kindly check the stateless-session project. The project requires a MySQL database to work, so kindly set up a schema following build.properties, or modify the properties file to fit your schema.</p>
<p>The project includes a Maven configuration to start a Tomcat server at port 8686. Therefore, you can simply type mvn cargo:run to start the server.</p>
<p>Here is the project hierarchy:</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-9VsfiTlER58/VAxfhTibDbI/AAAAAAAACM4/xegdHNVLxAo/s1600/project.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-9VsfiTlER58/VAxfhTibDbI/AAAAAAAACM4/xegdHNVLxAo/s1600/project.png" /></a></div>
<p>
I packed both the Tomcat 7 server and the database configuration so that the project works without any other installation except MySQL. The Tomcat configuration file TOMCAT_HOME/conf/context.xml contains the DataSource declaration and the project properties file.</p>
<p>Now, let us look closer at the implementation.</p>
<p><b><span style="font-size: large;">Session</span></b></p>
<p>We need two session objects: one representing the session cookie, and one representing the session object that we build internally in the Spring Security framework:</p>
<pre class="prettyprint"><code>public class SessionCookieData {<br /> <br /> private int userId;<br /> <br /> private String appId;<br /> <br /> private int siteId;<br /> <br /> private Date timeStamp;<br />}</code></pre><br />
<br />
<p>and</p>
<pre class="prettyprint"><code>public class UserSession {<br /> <br /> private User user;<br /> <br /> private Site site;<br /><br /> public SessionCookieData generateSessionCookieData(){<br /> return new SessionCookieData(user.getId(), user.getAppId(), site.getId());<br /> }<br />}</code></pre><br />
<br />
<p>With this combo, we have the objects to store the session in a cookie and in memory. The next step is to implement a method that allows us to build the session object from cookie data.</p>
<pre class="prettyprint"><code>public interface UserSessionService {<br /> <br /> public UserSession getUserSession(SessionCookieData sessionData);<br />}</code></pre><br />
<br />
<p>Now, one more service to retrieve and generate the cookie from cookie data:</p>
<pre class="prettyprint"><code>public class SessionCookieService {<br /><br /> public Cookie generateSessionCookie(SessionCookieData cookieData, String domain);<br /><br /> public SessionCookieData getSessionCookieData(Cookie sessionCookie);<br /><br /> public Cookie generateSignCookie(Cookie sessionCookie);<br />}</code></pre><br />
<br />
<p>Up to this point, we have the services that help us do the conversions</p>
<p>Cookie --> <span style="line-height: 1.15;">SessionCookieData --> UserSession</span><br />
<span style="line-height: 1.15;"><br /></span><br />
<span style="line-height: 1.15;">and</span><br />
<span style="line-height: 1.15;"><br /></span><br />
<span style="line-height: 1.15;">Session --> SessionCookieData --> Cookie</span></p>
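The signing step behind these conversions can be sketched with the JDK's HMAC support. The payload layout (userId|appId|siteId|timeStamp) and the plain string comparison are simplifications for illustration, not the project's actual format:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class CookieSigner {
    // HMAC-SHA256 over the serialized cookie data; all applications share the same key.
    public static String sign(String payload, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return payload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Recompute the signature and compare; returns the payload only if it is untampered.
    // (A production version should use a constant-time comparison.)
    public static String verify(String cookieValue, byte[] key) {
        int dot = cookieValue.lastIndexOf('.');
        if (dot < 0) return null;
        String payload = cookieValue.substring(0, dot);
        return sign(payload, key).equals(cookieValue) ? payload : null;
    }

    public static void main(String[] args) {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String value = sign("7|app1|3|1409279250", key); // userId|appId|siteId|timeStamp
        System.out.println(verify(value, key));
    }
}
```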
<p>Now we should have enough material to integrate the stateless session with the Spring Security framework.</p>
<p><b><span style="font-size: large;">Integrate with Spring security</span></b></p>
<p>First, we need to add a filter to construct the session from the cookie. Because this should happen before the permission check, it is best to use <i>AbstractPreAuthenticatedProcessingFilter</i>:</p>
<pre class="prettyprint"><code>@Component(value="cookieSessionFilter")<br />public class CookieSessionFilter extends AbstractPreAuthenticatedProcessingFilter {<br /> <br />...<br /> <br /> @Override<br /> protected Object getPreAuthenticatedPrincipal(HttpServletRequest request) {<br /> SecurityContext securityContext = extractSecurityContext(request);<br /> <br /> if (securityContext.getAuthentication()!=null&nbsp; <br /> &amp;&amp; securityContext.getAuthentication().isAuthenticated()){<br /> UserAuthentication userAuthentication = (UserAuthentication) securityContext.getAuthentication();<br /> UserSession session = (UserSession) userAuthentication.getDetails();<br /> SecurityContextHolder.setContext(securityContext);<br /> return session;<br /> }<br /> <br /> return new UserSession();<br /> }<br /> ...<br /> <br />}</code></pre><br />
<br />
<p>The filter above constructs the principal object from the session cookie. The filter also creates a <i>PreAuthenticatedAuthenticationToken</i> that will be used later for authentication. Obviously, Spring will not understand this principal. Therefore, we need to provide our own <i>AuthenticationProvider</i> that authenticates the user based on this principal.</p>
<pre class="prettyprint"><code>public class UserAuthenticationProvider implements AuthenticationProvider {<br />@Override<br /> public Authentication authenticate(Authentication authentication) throws AuthenticationException {<br /> PreAuthenticatedAuthenticationToken token = (PreAuthenticatedAuthenticationToken) authentication;<br /><br /> UserSession session = (UserSession)token.getPrincipal();<br /><br /> if (session != null &amp;&amp; session.getUser() != null){<br /> SecurityContext securityContext = SecurityContextHolder.getContext();<br /> securityContext.setAuthentication(new UserAuthentication(session));<br /> return new UserAuthentication(session);<br /> }<br /><br /> throw new BadCredentialsException("Unknown user name or password");<br /> }<br />}</code></pre><br />
<br />
<p>This is the Spring way: the user is authenticated if we manage to provide a valid Authentication object. Practically, we let the user log in by session cookie on every single request.</p>
<p>However, there are times when we need to alter the user session, and we can do that as usual in a controller method. We simply overwrite the SecurityContext, which was set up earlier in the pre-authentication filter.</p>
<p><span style="line-height: 1.15;"></span><br />
<pre class="prettyprint"><code>public ModelAndView login(String login, String password, String siteCode) throws IOException{<br /> <br /> if(StringUtils.isEmpty(login) || StringUtils.isEmpty(password)){<br /> throw new HttpServerErrorException(HttpStatus.BAD_REQUEST, "Missing login and password");<br /> }<br /> <br /> User user = authService.login(siteCode, login, password);<br /> if(user!=null){<br /> SecurityContext securityContext = SecurityContextHolder.getContext();<br /> UserSession userSession = new UserSession();<br /> userSession.setSite(user.getSite());<br /> userSession.setUser(user);<br /> securityContext.setAuthentication(new UserAuthentication(userSession));<br /> }else{<br /> throw new HttpServerErrorException(HttpStatus.UNAUTHORIZED, "Invalid login or password");<br /> }<br /> <br /> return new ModelAndView(new MappingJackson2JsonView());<br /> <br /> }</code></pre><br />
<br />
<span style="font-size: large;"><b>Refresh Session</b></span></p>
<p>Up to now, you may have noticed that we have never mentioned writing the cookie. Provided that we have a valid <i>Authentication</i> object and our <i>SecurityContext</i> contains the <i>UserSession</i>, it is important that we send this information back to the browser.</p>
<p>Before the <i>HttpServletResponse</i> is generated, we must generate and attach the session cookie to it. This new session cookie, which has the same domain and path, will replace the older session cookie that the browser is keeping.</p>
<p>As discussed above, refreshing the session is better done after the controller method, because we implement authentication at that layer. However, there is a challenge caused by the ViewResolver of Spring MVC: sometimes it writes to the OutputStream so soon that any attempt to add a cookie to the response is useless.</p>
<p>After consideration, we came up with a compromise: refresh the session before the controller method for normal requests, and after the controller method for authentication requests. To know whether a request is for authentication, we place a newly defined annotation on the authentication methods.</p>
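The annotation itself can be a plain runtime marker. A minimal sketch, where the name SessionUpdate matches the interceptor's lookup and everything else is illustrative:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class SessionUpdateDemo {
    // Marker for controller methods that authenticate: the cookie is refreshed *after* them.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface SessionUpdate {}

    @SessionUpdate
    public void login() {}

    public void listItems() {}

    // The interceptor checks exactly this flag via reflection on the handler method.
    public static boolean isAuthMethod(String methodName) {
        try {
            return SessionUpdateDemo.class.getMethod(methodName)
                    .isAnnotationPresent(SessionUpdate.class);
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isAuthMethod("login") + " " + isAuthMethod("listItems"));
    }
}
```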
<pre class="prettyprint"><code> @Override<br /> public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {<br /> if (handler instanceof HandlerMethod){<br /> HandlerMethod handlerMethod = (HandlerMethod) handler;<br /> SessionUpdate sessionUpdateAnnotation = handlerMethod.getMethod().getAnnotation(SessionUpdate.class);<br /> <br /> if (sessionUpdateAnnotation == null){<br /> SecurityContext context = SecurityContextHolder.getContext();<br /> if (context.getAuthentication() instanceof UserAuthentication){<br /> UserAuthentication userAuthentication = (UserAuthentication)context.getAuthentication();<br /> UserSession session = (UserSession) userAuthentication.getDetails();<br /> persistSessionCookie(response, session);<br /> }<br /> }<br /> }<br /> return true;<br /> }<br /><br /> @Override<br /> public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,<br /> ModelAndView modelAndView) throws Exception {<br /> if (handler instanceof HandlerMethod){<br /> HandlerMethod handlerMethod = (HandlerMethod) handler;<br /> SessionUpdate sessionUpdateAnnotation = handlerMethod.getMethod().getAnnotation(SessionUpdate.class);<br /> <br /> if (sessionUpdateAnnotation != null){<br /> SecurityContext context = SecurityContextHolder.getContext();<br /> if (context.getAuthentication() instanceof UserAuthentication){<br /> UserAuthentication userAuthentication = (UserAuthentication)context.getAuthentication();<br /> UserSession session = (UserSession) userAuthentication.getDetails();<br /> persistSessionCookie(response, session);<br /> }<br /> }<br /> }<br /> }</code></pre><br />
<br />
<p><span style="font-size: x-large;"><b>Conclusion</b></span></p>
<p>The solution works well for us, but we are not confident that it is the best practice possible. However, it is simple and did not cost us much effort to implement (around 3 days including testing).</p>
<p>Kindly give feedback if you have any better idea for building stateless sessions with Spring.</p>
http://www.java.net/blog/sgdev-blog/archive/2014/09/07/stateless-session-multi-tenant-application-using-spring-security#commentsBloggingBlogsJ2EEJava EnterpriseSecurityWeb ApplicationsMon, 08 Sep 2014 03:31:45 +0000sgdev-blog930555 at http://www.java.netDistributed Crawlinghttp://www.java.net/blog/sgdev-blog/archive/2014/08/28/distributed-crawling
<p><a href="http://4.bp.blogspot.com/-dYRRSOy58ZU/U_hKNzgB78I/AAAAAAAAB3c/xsiXQ9_r5v4/s1600/black_widow.jpeg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://4.bp.blogspot.com/-dYRRSOy58ZU/U_hKNzgB78I/AAAAAAAAB3c/xsiXQ9_r5v4/s1600/black_widow.jpeg" height="252" width="320" /></a>Around 3 months ago, I posted <a href="http://sgdev-blog.blogspot.sg/2014/05/how-to-build-java-based-cloud.html">an article explaining our approach and considerations for building a Cloud Application</a>. Starting with this article, I will gradually share the practical designs we use to solve this challenge.</p>
<p>As mentioned before, our final goal is to build a <a href="http://en.wikipedia.org/wiki/Software_as_a_service">SaaS</a> big data analysis application, which will be deployed on <a href="http://aws.amazon.com/">AWS</a> servers. In order to fulfill this goal, we need to build distributed crawling, indexing and distributed training systems.</p>
<p>The focus of this article is how to build the distributed crawling system. The fancy name for this system will be&nbsp;<b>Black Widow</b>.</p>
<p><b><span style="font-size: large;">Requirements</span></b></p>
<p>As usual, let us start with the business requirements for the system. Our goal is to build a scalable crawling system that can be deployed on the cloud. The system should be able to function in an unreliable, high-latency network and recover automatically from partial hardware or network failures.</p>
<p>For the first release, the system can crawl from three kinds of sources: <a href="http://datasift.com/">Datasift</a>, the <a href="https://dev.twitter.com/docs/api/streaming">Twitter API</a> and RSS feeds. The data crawled back is called a <b>Comment</b>. The RSS crawlers are supposed to read public sources like websites or blogs, free of charge. DataSift and Twitter both provide proprietary APIs to access their streaming services. Datasift charges its users by comment count and by the complexity of CSDL (Curated Stream Definition Language, their own query language). Twitter, on the other hand, offers its Twitter Sampler streaming for free.</p>
<p>In order to control cost, we need a mechanism to limit the number of comments crawled from commercial sources like Datasift. As Datasift also provides Twitter comments, it is possible for a single comment to come from different sources. At the moment, we do not try to eliminate this and accept it as data duplication. However, the problem can be avoided manually through user configuration (avoid choosing both Twitter and Datasift Twitter together).</p>
<p>For future extension, the system should be able to link related comments to form a conversation.</p>
<p><span style="font-size: large;"><b>Food for Thought</b></span></p>
<p><b>Centralized Architecture</b></p>
<p>Our first thought on receiving the requirements was to do the crawling on the nodes, which we call <b>Spawn</b> nodes, and let the hub, which we call <b>Black Widow</b>, manage the collaboration among nodes. This idea was quickly accepted by team members, as it allows the system to scale well with the hub doing limited work.</p>
<p>As with any other centralized system, Black Widow suffers from the <a href="http://en.wikipedia.org/wiki/Single_point_of_failure">single point of failure</a> problem. To ease this problem, we allow a node to function independently for a short period after losing its connection to Black Widow. This gives the support team breathing room to bring up a backup server.</p>
<p>Another bottleneck in the system is data storage. For the volume of data being crawled (easily a few thousand records per second), <i>NoSQL</i> is clearly the choice for storing the crawled comments. We have experience working with <i>Lucene</i> and <i>MongoDB</i>. However, after research and some minor experiments, we chose <a href="http://cassandra.apache.org/"><i>Cassandra</i></a> as the <i>NoSQL</i> database.</p>
<p>With those thoughts, we visualized the distributed crawling system to be built following this prototype:</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-k0YLGRb3f5s/U_hgPdFTORI/AAAAAAAAB4Y/Fd1iKIDwp1Y/s1600/black_widow_system.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-k0YLGRb3f5s/U_hgPdFTORI/AAAAAAAAB4Y/Fd1iKIDwp1Y/s1600/black_widow_system.png" height="587" width="640" /></a></div>
<p>In the diagram above, Black Widow, the hub, is the only server that has access to the SQL database, where we store the crawling configuration. Therefore, all the Spawns, or crawling nodes, are fully stateless. A Spawn simply wakes up, registers itself with Black Widow and does the assigned jobs. After getting the comments, the Spawn stores them in the Cassandra cluster and also pushes them to some queues for further processing.</p>
<p><b>Brainstorming of possible issues</b></p>
<p>To explain the design to non-technical people, we like to relate the business requirement to a similar problem in real life so that it is easier to understand. The problem we chose is collaborating with volunteers.</p>
<p>Imagine we need to do a lot of preparation work for the upcoming Olympics and decide to recruit volunteers from all around the world to help. We do not know the volunteers, but the volunteers know our email address, so they can contact us to register. Only then do we know their email addresses and can send tasks to them by email. We would not want to send one task to two volunteers or leave some tasks unattended, and we want to distribute the tasks evenly so that no volunteer suffers too much.</p>
<p>Due to cost, we would not contact them by mobile phone. However, because email is less reliable, when sending out tasks to volunteers, we request a confirmation. A task is considered assigned only when the volunteer has replied with a confirmation.</p>
<p>In this example, the volunteers represent Spawn nodes while email communication represents the unreliable, high-latency network. Here are some problems that we need to solve:</p>
<p><b>1/ Node failure</b></p>
<p>For this problem, the best approach is to check regularly. If a volunteer stops responding to the regular progress-check emails, the task should be re-assigned to someone else.</p>
<p><b>2/ Optimization of tasks assigning</b></p>
<p>Some tasks are related, so assigning related tasks to the same person can reduce total effort. This happens in our crawling as well: some crawling configurations have similar search terms, and grouping them together to share the streaming channel helps to reduce the final bill.</p>
<p>Another concern is fairness, or the ability to distribute the work evenly among volunteers. The simplest strategy we can think of is Round Robin, with a minor tweak: remembering earlier assignments. If a task is very similar to a task we assigned before, it is skipped from the Round Robin selection and assigned directly to the same volunteer.</p>
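The tweaked Round Robin can be sketched like this, where task keys and volunteer names are purely illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StickyRoundRobin {
    private final List<String> volunteers;
    private final Map<String, String> earlierAssignments = new HashMap<>();
    private int next = 0;

    public StickyRoundRobin(List<String> volunteers) {
        this.volunteers = new ArrayList<>(volunteers);
    }

    // Similar tasks (same key) skip the rotation and go to the earlier assignee;
    // new tasks advance the Round Robin pointer.
    public String assign(String taskKey) {
        String existing = earlierAssignments.get(taskKey);
        if (existing != null) return existing;
        String picked = volunteers.get(next);
        next = (next + 1) % volunteers.size();
        earlierAssignments.put(taskKey, picked);
        return picked;
    }

    public static void main(String[] args) {
        StickyRoundRobin rr = new StickyRoundRobin(List.of("alice", "bob"));
        System.out.println(rr.assign("crawl:java") + " " + rr.assign("crawl:scala")
                + " " + rr.assign("crawl:java"));
    }
}
```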
<p><b>3/ The hub is not working</b></p>
<p>If for some reason our email server is down and we cannot contact the volunteers any more, it is better to let the volunteers stop working on their assigned tasks. The main concern here is cost over-run and wasted effort. However, stopping work immediately is too hasty, as a temporary infrastructure issue may be causing the communication problem.</p>
<p>Hence, we need to find a reasonable amount of time for the node to continue functioning after being detached from the hub.</p>
<p><b>4/ Cost control</b></p>
<p>Due to the business requirements, there are two kinds of cost control that we need to implement: first, the total number of comments crawled per crawler, and second, the total number of comments crawled by all crawlers belonging to the same user.</p>
<p>This is where we had a debate about the best approach to implement cost control. It is very straightforward to implement the limit for each crawler. We can simply pass this limit to the Spawn node, and it will automatically stop the crawler when the limit is reached.</p>
<p>However, the limit per user is not so straightforward, and we had two possible approaches. The simpler choice is to send all the crawlers of one user to the same node. Then, similar to the earlier problem, the Spawn node knows the number of comments collected and stops all crawlers when the limit is reached. This approach is simple, but it limits the ability to distribute jobs evenly among nodes. The alternative approach is to let all the nodes retrieve and update a global counter. This approach creates significant internal network traffic and adds considerable delay to comment processing time.</p>
<p>At this point, we have temporarily chosen the global counter approach. This can be revisited if performance becomes a real concern.</p>
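A sketch of the per-user counter follows. In the real system the counter would live in shared storage reachable by all nodes; this illustration keeps it in an in-memory map, and all names are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CommentQuota {
    private final Map<String, AtomicLong> perUserCount = new ConcurrentHashMap<>();
    private final long limitPerUser;

    public CommentQuota(long limitPerUser) {
        this.limitPerUser = limitPerUser;
    }

    // Every node updates the shared counter; returns false once the user's limit is hit.
    public boolean tryConsume(String userId) {
        AtomicLong counter = perUserCount.computeIfAbsent(userId, k -> new AtomicLong());
        if (counter.incrementAndGet() > limitPerUser) {
            counter.decrementAndGet(); // roll back: the limit has been reached
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        CommentQuota quota = new CommentQuota(2);
        System.out.println(quota.tryConsume("u1") + " " + quota.tryConsume("u1")
                + " " + quota.tryConsume("u1"));
    }
}
```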
<p><b>5/ Deploy on the cloud</b></p>
<p>As with any other cloud application, we cannot put too much trust in the network or infrastructure. Here is how we make our application conform to the checklist mentioned in the last article:</p>
<ul>
<li><i>Stateless</i>: Our Spawn nodes are stateless, but the hub is not. Therefore, in our design, the nodes do the actual work and the hub only coordinates efforts.</li>
<li><i>Idempotence</i>: We implement the <i>hashCode</i> and <i>equals</i> methods for every crawler configuration and store the crawler configurations in a Map or Set. Therefore, a crawler configuration can be sent multiple times without any side effect. Moreover, our node selection approach ensures that the job will always be sent to the same node.</li>
<li><i>Data Access Object</i>: We apply the JsonIgnore filter on every model object to make sure no confidential data flies around in the network.</li>
<li><i>Play Safe</i>: We implement a health-check API for each node and for the hub itself. The first level of support gets notified immediately when anything goes wrong.</li>
</ul>
<div>
<b>6/ Recovery</b></div>
<div>
</div>
<div>
We try our best to make the system heal itself from partial failures. Here are the types of failure we can recover from:</div>
<div>
<ul>
<li><i>Hub failure</i>: A node registers itself with the hub when it starts up. From then on, communication is one-way: only the hub sends jobs to the node and polls for status updates. A node considers itself detached if it fails to get any contact from the hub for a pre-defined period. A detached node clears all its job configurations and starts registering itself with the hub again. If the incident was caused by a hub failure, a new hub will fetch the crawling configurations from the database and start distributing jobs again; all existing jobs on the Spawn nodes were cleared when the nodes went into detached mode.</li>
<li><i>Node failure</i>: When the hub fails to poll a node, it does a hard reset by removing all working jobs and re-distributing them from the beginning over the working nodes. This re-distribution process helps to keep the distribution optimized.</li>
<li><i>Job failure</i>: Two kinds of failure can happen when the hub sends and polls jobs. If a job fails in the polling process but the Spawn node is still working well, Black Widow can re-assign the job to the same node again. The same is done if the job sending failed.&nbsp;</li>
</ul>
</div>
<p>
<b><span style="font-size: large;">Implementation</span></b></p>
<p><b>Data Source and Subscriber</b></p>
<p>In our initial thinking, each crawler would open its own channel to retrieve data, but this does not make sense on closer inspection. For RSS, we can scan all URLs once and find the keywords that may belong to multiple crawlers. For Twitter, a single query supports up to 200 search terms, so it is possible to open one channel that serves multiple crawlers. For Datasift, it is quite rare, but due to human mistake or luck, it is possible to have crawlers with identical search terms.</p>
<p>This situation led us to split the crawler into two entities: subscriber and data source. The subscriber is in charge of consuming the comments, while the data source is in charge of crawling them. With this design, if there are two crawlers with similar keywords, a single data source is created to serve two subscribers, each processing the comments in its own way.</p>
<p>A data source is created when and only when no similar data source exists. It starts working when the first subscriber subscribes to it and retires when the last subscriber unsubscribes from it. With the help of Black Widow sending similar subscribers to the same node, we can minimize the number of data sources created and, indirectly, the crawling cost.</p>
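This subscribe/retire life cycle is essentially reference counting. A minimal sketch, where method and field names are assumptions:

```java
import java.util.HashSet;
import java.util.Set;

public class DataSource {
    private final Set<String> subscribers = new HashSet<>();
    private boolean running = false;

    // The data source starts streaming when the first subscriber arrives.
    public synchronized void subscribe(String subscriberId) {
        subscribers.add(subscriberId);
        if (!running) {
            running = true; // the streaming channel would be opened here
        }
    }

    // It retires when the last subscriber leaves.
    public synchronized void unsubscribe(String subscriberId) {
        subscribers.remove(subscriberId);
        if (subscribers.isEmpty()) {
            running = false; // the streaming channel would be closed here
        }
    }

    public synchronized boolean isRunning() {
        return running;
    }

    public static void main(String[] args) {
        DataSource ds = new DataSource();
        ds.subscribe("crawler-1");
        ds.subscribe("crawler-2");
        ds.unsubscribe("crawler-1");
        System.out.println(ds.isRunning());
        ds.unsubscribe("crawler-2");
        System.out.println(ds.isRunning());
    }
}
```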
<p><b>Data Structure</b></p>
<p>The biggest concern with the data structures is thread safety. In the Spawn node, we must store all running subscribers and data sources in memory. There are a few scenarios in which we need to modify or access this data:</p>
<ul>
<li>When a subscriber hits its limit, it automatically unsubscribes from the data source, which may lead to deactivation of the data source.</li>
<li>When Black Widow sends a new subscriber to a Spawn node.&nbsp;</li>
<li>When Black Widow sends a request to unsubscribe an existing subscriber.&nbsp;</li>
<li>The health-check API exposes all running subscribers and data sources.&nbsp;</li>
<li>Black Widow regularly polls the status of each assigned subscriber.</li>
<li>The Spawn node regularly checks for and disables orphan subscribers (subscribers that are no longer polled by Black Widow).</li>
</ul>
<div>
Another concern with the data structures is idempotence of operations. Any of the operations above can be missed or duplicated. To handle this problem, here is our approach:</div>
<div>
<ul>
<li>Implement the <i>hashCode</i> and <i>equals</i> methods for every subscriber and data source.&nbsp;</li>
<li>We choose a <i>Set</i> or <i>Map</i> to store the collections of subscribers and data sources. For records with identical hash codes, a <i>Map</i> will replace the record on a new insertion, but a <i>Set</i> will skip the new record. Therefore, if we use a <i>Set</i>, we need to ensure that new records can replace old records.&nbsp;</li>
<li>We use <i>synchronized</i> in the data access code.</li>
<li>If a Spawn node receives a new subscriber that is similar to an existing one, it compares them and prefers to update the existing subscriber instead of replacing it. This avoids unsubscribing and re-subscribing identical subscribers, which would interrupt data source streaming.</li>
</ul>
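The Map-replaces versus Set-skips behaviour can be demonstrated directly. The Subscriber class here is illustrative, with equality defined by id only:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class IdempotentStore {
    // Two configs are "the same job" when their ids match, even if details differ.
    public static final class Subscriber {
        public final String id;
        public final String detail;

        public Subscriber(String id, String detail) {
            this.id = id;
            this.detail = detail;
        }

        @Override public boolean equals(Object o) {
            return o instanceof Subscriber && ((Subscriber) o).id.equals(id);
        }

        @Override public int hashCode() {
            return id.hashCode();
        }
    }

    public static void main(String[] args) {
        Subscriber old = new Subscriber("job-1", "v1");
        Subscriber updated = new Subscriber("job-1", "v2");

        Map<String, Subscriber> map = new HashMap<>();
        map.put(old.id, old);
        map.put(updated.id, updated); // Map replaces: v2 is kept

        Set<Subscriber> set = new HashSet<>();
        set.add(old);
        set.add(updated);             // Set skips the equal element: v1 survives,
                                      // so remove-then-add is needed to update
        System.out.println(map.get("job-1").detail + " " + set.iterator().next().detail);
    }
}
```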
<div>
<b>Routing</b></div>
</div>
<div>
</div>
<div>
As mentioned before, we need a routing mechanism that serves two purposes:</div>
<div>
<ul>
<li>Distribute the jobs evenly among Spawn nodes.</li>
<li>Route similar jobs to the same node.</li>
</ul>
<div>
We solved this problem by generating a unique representation of each query, named <i>uuid</i>. After that, we can use a simple modular function to find the node to route to:</div>
</div>
<div>
</div>
<pre class="prettyprint"><code>int size = activeBwsNodes.size();<br />int hashCode = uuid.hashCode();<br />int index = hashCode % size;<br />assignedNode = activeBwsNodes.get(index);</code></pre>
<div>
With this implementation, subscribers with the same <i>uuid</i> will always be sent to the same node, and each node has an equal chance of being selected to serve a subscriber.&nbsp;</div>
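One caveat worth noting: in Java, hashCode() can be negative, so a plain % can produce a negative index. A sketch of the same routing using Math.floorMod, with illustrative node names:

```java
import java.util.List;

public class NodeRouter {
    // floorMod keeps the index in [0, size) even when uuid.hashCode() is negative.
    public static String route(String uuid, List<String> activeNodes) {
        int index = Math.floorMod(uuid.hashCode(), activeNodes.size());
        return activeNodes.get(index);
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("spawn-1", "spawn-2", "spawn-3");
        // The same uuid always maps to the same node while the node set is unchanged.
        System.out.println(route("subscriber-uuid", nodes)
                .equals(route("subscriber-uuid", nodes)));
    }
}
```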
<div>
</div>
<div>
This whole scheme breaks down when the collection of active Spawn nodes changes. Therefore, Black Widow must clear all running jobs and reassign them from the beginning whenever a node changes. However, node changes should be quite rare in a production environment.</div>
<div>
</div>
<div>
<b>Handshake</b></div>
<div>
</div>
<div>
Below is the sequence diagram of the Black Widow and node collaboration:</div>
<div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-HGBw6KzgBgM/U_7AYcZ047I/AAAAAAAAB7s/rETGMqCtYkY/s1600/node_sequence_diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-HGBw6KzgBgM/U_7AYcZ047I/AAAAAAAAB7s/rETGMqCtYkY/s1600/node_sequence_diagram.png" height="539" width="640" /></a></div>
<div>
</div>
<p>Black Widow does not know about Spawn nodes in advance; it waits for each Spawn node to register itself. From there, Black Widow has the responsibility to poll the node to maintain connectivity. If Black Widow fails to poll a node, it removes the node from its container. The orphan node eventually goes into detached mode because it is no longer being polled. In this mode, the Spawn node clears its existing jobs and tries to register itself again.</p>
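The detached-mode check can be sketched as a simple time-stamp comparison; the threshold and method names are assumptions:

```java
public class SpawnNode {
    private final long detachThresholdMillis;
    private long lastPollFromHub;

    public SpawnNode(long detachThresholdMillis, long now) {
        this.detachThresholdMillis = detachThresholdMillis;
        this.lastPollFromHub = now;
    }

    // Black Widow's poll reaches the node; the node only records the time.
    public void onPolled(long now) {
        lastPollFromHub = now;
    }

    // Detached when no poll arrived within the threshold:
    // the node would then clear its jobs and re-register with the hub.
    public boolean isDetached(long now) {
        return now - lastPollFromHub > detachThresholdMillis;
    }

    public static void main(String[] args) {
        SpawnNode node = new SpawnNode(60_000, 0);
        node.onPolled(30_000);
        System.out.println(node.isDetached(50_000) + " " + node.isDetached(100_000));
    }
}
```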
<p>The next diagram is the subscriber life cycle:</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-CwnJ32eUP_8/U_7DHCZsiPI/AAAAAAAAB8A/FMe_-yltWy4/s1600/job_sequence_diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-CwnJ32eUP_8/U_7DHCZsiPI/AAAAAAAAB8A/FMe_-yltWy4/s1600/job_sequence_diagram.png" height="588" width="640" /></a></div>
<p></p>
<div class="separator" style="clear: both; text-align: center;">
</div>
<p>
Similarly, Black Widow has the responsibility of polling the subscribers it sends to a Spawn node. If a subscriber is no longer being polled by Black Widow, the Spawn node treats the subscriber as an orphan and removes it. This practice helps to eliminate the threat of a Spawn node running an obsolete subscriber.</p>
<p>On Black Widow, when polling a subscriber fails, it tries to get a new node to assign the job to. If the subscriber's Spawn node is still available, the same job is likely to go to the same node again because of the routing mechanism we use.</p>
<p><b>Monitoring</b></p>
<p>In a happy scenario, all the subscribers are running, Black Widow is polling and nothing else happens. However, this is not what happens in real life. There will be changes in Black Widow and the Spawn nodes from time to time, triggered by various events.</p>
<p>For Black Widow, there will be changes under the following circumstances:</p>
<ul>
<li>Subscriber hit limit</li>
<li>Found new subscriber</li>
<li>Existing subscriber disabled by user</li>
<li>Polling of subscriber fails</li>
<li>Polling of Spawn node fails</li>
</ul>
<div>
To handle changes, the Black Widow monitoring tool offers two services: hard reload and soft reload. A Hard Reload happens on a node change, while a Soft Reload happens on a subscriber change. The Hard Reload process takes back all running jobs and redistributes them from the beginning over the available nodes. The Soft Reload process removes obsolete jobs, assigns new jobs and re-assigns failed jobs.</div>
<div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-DLnwkxCQwGU/U_7PU2STkqI/AAAAAAAAB8Q/-jNNCUniKVI/s1600/black_widow_monitor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-DLnwkxCQwGU/U_7PU2STkqI/AAAAAAAAB8Q/-jNNCUniKVI/s1600/black_widow_monitor.png" height="640" width="409" /></a></div>
<div>
</div>
<div>
Compared to Black Widow, monitoring a Spawn node is simpler. The two main concerns are maintaining connectivity to Black Widow and removing orphan subscribers.</div>
<div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-F8DVhEsdLrc/U_7SczQJACI/AAAAAAAAB8c/rjroRLP0pQ8/s1600/spawn_monitor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-F8DVhEsdLrc/U_7SczQJACI/AAAAAAAAB8c/rjroRLP0pQ8/s1600/spawn_monitor.png" height="537" width="640" /></a></div>
<div>
</div>
<div>
<b><span style="font-size: large;">Deployment Strategy</span></b></div>
<div>
</div>
<div>
The deployment strategy is straightforward. We need to bring up Black Widow and at least one Spawn node; the Spawn node must know the URL of Black Widow. From then on, the Health Check API gives us the number of subscribers per node. We can integrate the Health Check with the AWS API to automatically bring up a new Spawn node when the existing nodes are overloaded. The Spawn node image needs to run the Spawn application as a service. Similarly, when the nodes are under-utilized, we can bring down redundant Spawn nodes.</div>
<div>
</div>
<div>
Black Widow needs special treatment due to its importance. If Black Widow fails, we can restart the application. This causes all existing jobs on the Spawn nodes to become orphans, and all the Spawn nodes go into detached mode. Slowly, each node cleans itself up and tries to register again. Under the default configuration, the whole restart process completes within 15 minutes.</div>
<div>
</div>
<div>
<b><span style="font-size: large;">Threats and possible improvement</span></b></div>
<div>
</div>
<div>
When choosing a centralized architecture, we knew that Black Widow would be the biggest risk to the system. While a Spawn node failure only causes a minor interruption for the affected subscribers, a Black Widow failure eventually leads to a restart of the Spawn nodes, which takes much longer to recover from.</div>
<div>
</div>
<div>
Moreover, even though the system can recover from a partial failure, there is still an interruption of service during the recovery process. Therefore, if polling requests fail too often due to unstable infrastructure, operation will be greatly hampered.</div>
<div>
</div>
<div>
Scalability is another concern for a centralized architecture. We do not yet have a concrete figure for the maximum number of Spawn nodes that Black Widow can handle. Theoretically, it should be very high, because Black Widow only does minor processing; most of its effort goes into sending out HTTP requests. It is possible that the network is the main limiting factor for this architecture. Because of this, we let Black Widow poll the nodes rather than having the nodes poll Black Widow (other systems, like Hadoop, do the latter). With this approach, Black Widow can work at its own pace, not under pressure from the Spawn nodes.</div>
<div>
</div>
<div>
One of the first questions we got was whether this is a MapReduce problem, and the answer is no. Each subscriber in our Distributed Crawling System processes its own comments and does not report results back to Black Widow. That is why we do not use any MapReduce product like <i>Hadoop</i>. Our monitor is aware of business logic rather than being purely infrastructure monitoring, which is why we chose to build it ourselves instead of using tools like <i>ZooKeeper</i> or <i>Akka</i>.</div>
<div>
</div>
<div>
As a future improvement, it would be better to move away from the centralized architecture by having multiple hubs collaborating with each other. This should not be too difficult, given that the only time Black Widow accesses the database is when loading subscribers. Therefore, we can slice the data and let each Black Widow load a portion of it.</div>
<div>
</div>
<div>
Another point that leaves me unsatisfied is the checking of the global counter for the user limit. As the check happens on every crawled comment, it greatly increases internal network traffic and limits the scalability of the system. A better strategy would be to divide the quota based on processing speed: Black Widow can regulate and redistribute the quota for each subscriber (on different nodes).</div>
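<p>The proposed improvement could look roughly like this. It is only a sketch, under the assumption that Black Widow knows how many nodes run a given user's subscribers; the class and method names are invented for illustration:</p>

```java
// Hypothetical quota division: instead of hitting a shared counter for every crawled
// comment, the coordinator hands each node a slice of the user's limit up front.
public class QuotaSplit {

    // Splits a user's crawl limit evenly across nodes; the coordinator keeps the
    // remainder and can redistribute it to faster nodes later.
    public static int quotaPerNode(int userLimit, int nodeCount) {
        if (nodeCount <= 0) {
            throw new IllegalArgumentException("nodeCount must be positive");
        }
        return userLimit / nodeCount;
    }
}
```

<p>Each node then only needs to check its local slice, and the coordinator rebalances the slices periodically rather than on every comment.</p>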
http://www.java.net/blog/sgdev-blog/archive/2014/08/28/distributed-crawling#commentsBloggingBlogsJ2EEJava CommunicationsJava EnterpriseWeb ApplicationsFri, 29 Aug 2014 02:28:18 +0000sgdev-blog930454 at http://www.java.netOn The Goodness Of Tiny Moduleshttp://www.java.net/blog/timboudreau/archive/2014/08/24/goodness-tiny-modules
<!-- | 0 --><p>Why you should write <a href="http://j.mp/smalllib">small libraries that do one thing well</a>, over on my real blog at timboudreau.com</p>
<p>A response to Eran Hammer's <a href="http://hueniverse.com/2014/05/30/the-fallacy-of-tiny-modules/">The Fallacy of Tiny Modules</a>.</p>
BlogsJ2EEJ2MEJ2SEJava DesktopJava EnterpriseJava PatternsJava ToolsJava User GroupsJavaOneNetBeansNetBeansOpen SourcePatternsWeb Development ToolsSun, 24 Aug 2014 15:04:37 +0000timboudreau930396 at http://www.java.netIntroduces Redis in Java with redis-collectionshttp://www.java.net/blog/otaviojava/archive/2014/08/01/introduces-redis-java-redis-collections
<!-- | 0 --><p>Redis is a NoSQL database written in C. The Remote Dictionary Server is a key-value database whose storage is in memory, so writes and reads are very fast. But what is the difference between Redis and a cache? What happens when the database goes down? Will we lose all the information?<br />
The main goal of this article is to talk about Redis and to show an open source project, redis-collections.</p>
<p>Analyzing cache tools such as Memcached and Infinispan, which also have key-value behavior, the first question a Java developer asks is: what is the difference between Redis and a cache? To start, there is the serialization of values: while caches write in a binary format, using Kryo or java.io.Serializable, in Redis everything is text, a String. The problem with binary serialization is the impossibility of changing the stored data manually or reading it from another language. When Kryo or java.io.Serializable is used and the object structure changes, you can no longer retrieve the information written with the old structure; on the other hand, binary writes and reads are faster than text. There is a reason for this: caches are made to be fast but also temporary. Redis is a database and can persist its information, backing it up to the hard drive; this way, if Redis goes down it can recover without losing all the data. Another difference is that Redis offers several data structures:</p>
<ul>
<li>String: key and value</li>
<li>List: simply lists of strings, sorted by insertion order. It is possible to push elements onto a Redis list (similar to java.util.List in Java).</li>
<li>Set: an unordered collection of unique Strings; unlike a list, you cannot put a repeated String into it (similar to java.util.Set in Java).</li>
<li>Sorted Set: similarly to Redis Sets, a non-repeating collection of Strings. The difference is that every member of a Sorted Set is associated with a score.</li>
<li>Hashes: maps between string fields and string values, so they are the perfect data type to represent objects (similar to Map<String, String> in Java).</li>
</ul>
<p>Beyond these, there are two more, Bit arrays and HyperLogLogs, but they will not be covered in this article.</p>
<p>Having explained the differences between Redis and a cache, the next step is to install Redis, which is very easy:</p>
<p><br/></p>
<ol>
<li>Download it here: <a href="http://redis.io/" > http://redis.io/</a></li>
<li>Uncompress and then compile Redis:<br />
tar zxvf redis-version.x.tar.gz<br />
cd redis-version.x<br />
make
</li>
<li>The last step is to run Redis: cd src, then ./redis-server</li>
</ol>
<p><br/></p>
<p>Done, Redis is running. To test it, just execute the native client: go to REDIS_HOME/src and run ./redis-cli.</p>
<p>Feel free to run any commands; to learn the available commands in Redis, see: <a href="http://redis.io/commands" >http://redis.io/commands</a></p>
<p>With Redis installed and tested, we will now use some data structures backed by Redis, mirroring popular structures in Java: java.util.List, java.util.Set, java.util.Queue and java.util.Map, plus three more: key-value as a cache, a counter, and a ranking.</p>
<p>You can see the API here:</p>
<pre class="prettyprint"><code>public interface keyValueRedisStructure<T> {

    T get(String key);

    void set(String key, T bean);

    List<T> multiplesGet(Iterable<String> keys);

    void delete(String key);
}

public interface CountStructure<T extends Number> {

    T get();

    T increment();

    T increment(T count);

    T decrement();

    T decrement(T count);

    void delete();

    void expires(int ttlSeconds);

    void persist();
}

public interface ListStructure<T> extends Expirable {

    List<T> get(String key);

    void delete(String key);
}

public interface MapStructure<T> extends Expirable {

    Map<String, T> get(String key);

    void delete(String key);
}

public interface QueueStructure<T> extends Expirable {

    Queue<T> get(String key);

    void delete(String key);
}

public interface RankingStructure<T extends Number> extends Expirable {

    ScoresPoint<T> create(String key);

    void delete(String key);
}</code></pre>
<p>The idea of the Expirable interface is to define, per key, the lifecycle, or time to expiry, of the objects in seconds; to remove this lifecycle just use the persist method. Remember that Redis lives in memory, but the data is saved periodically and all information is represented as Strings. For these structures I chose to serialize and deserialize the objects to JSON, because it is lightweight, using Gson, Google's framework for writing and reading objects in JSON. It is possible to represent the objects in Redis in other ways too, such as XML, or fields split with the pipe character "|".</p>
<h3>Why java.util?</h3>
<p>The collections inside java.util are certainly familiar to Java developers, which makes the API easier to use.</p>
<h3>What is the difference?</h3>
<p>The implementations are different: in the JDK, the collections hold the information themselves, while the Redis implementations are lazy. For example, RedisList uses Jedis, the API for communicating with Redis from Java, to perform each operation (insert, remove, retrieve, or size of a list) on demand. The information is not in the RedisList; it is a bridge to the information. So a RedisList is equal to another RedisList if both have the same key.</p>
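<p>The "bridge to the information" idea can be illustrated with a minimal sketch. The class below is hypothetical, not the actual redis-collections code: because a lazy view holds no data itself, equality depends only on the Redis key it points at.</p>

```java
import java.util.Objects;

// Minimal sketch of a lazy Redis-backed list view: it stores only the key and would
// delegate every operation to Jedis on demand, so two views over the same key are equal.
public class RedisListView {
    private final String key;

    public RedisListView(String key) {
        this.key = key;
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof RedisListView
                && ((RedisListView) other).key.equals(key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key);
    }
}
```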
<h3>Namespace convention:</h3>
<p>For standardization, Redis uses the namespace concept, which works as a prefix of the key. The format is namespace:key; for example, to insert a user whose nickname is java you would use: users:java.</p>
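<p>A tiny helper makes the convention concrete. This is hypothetical code, shown only to illustrate the key format:</p>

```java
// Builds keys following the namespace:key convention, e.g. "users:java".
public class NamespacedKey {
    public static String of(String namespace, String key) {
        return namespace + ":" + key;
    }
}
```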
<h3>The Redis client:</h3>
<p>Jedis is a Java client for Redis; with it, it is possible to perform all operations on keys, and it is easy to use:</p>
<pre class="prettyprint"><code>Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
// Jedis will find the other cluster nodes automatically
jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7379));
JedisCluster jc = new JedisCluster(jedisClusterNodes);
jc.set("foo", "bar");
String value = jc.get("foo");</code></pre>
<h3>Building structures:</h3>
<p>The main class to build structures is the RedisStructureBuilder:</p>
<pre class="prettyprint"><code>ListStructure<ProductCart> shippingCart = RedisStrutureBuilder.ofList(jedis, ProductCart.class).withNameSpace("list_producs").build();
List<ProductCart> fruitsCarts = shippingCart.get(FRUITS);
fruitsCarts.add(banana);
ProductCart banana = fruitsCarts.get(0);

User otaviojava = new User("otaviojava");
User felipe = new User("ffrancesquini");
SetStructure<User> socialMediaUsers = RedisStrutureBuilder.ofSet(RedisConnection.JEDIS, User.class).withNameSpace("socialMedia").build();
Set<User> users = socialMediaUsers.createSet("twitter");
users.add(otaviojava);
users.add(otaviojava);
users.add(felipe);
users.add(otaviojava);
users.add(felipe);
// Just one otaviojava object and one felipe

Species mammals = new Species("lion", "cow", "dog");
Species fishes = new Species("redfish", "glassfish");
Species amphibians = new Species("crocodile", "frog");

MapStructure<Species> zoo = RedisStrutureBuilder.ofMap(RedisConnection.JEDIS, Species.class).withNameSpace("animalZoo").build();

Map<String, Species> vertebrates = zoo.get("vertebrates");
vertebrates.put("mammals", mammals);
vertebrates.put("fishes", fishes);
vertebrates.put("amphibians", amphibians);

QueueStructure<LineBank> serviceBank = RedisStrutureBuilder.ofQueue(RedisConnection.JEDIS, LineBank.class).withNameSpace("serviceBank").build();

Queue<LineBank> lineBank = serviceBank.get("createAccount");
lineBank.add(new LineBank("Otavio", 25));
LineBank otavio = lineBank.poll();</code></pre>
<p>This article talked about Redis and redis-collections, which uses popular Java structures, and also covered the differences between a cache and Redis. It is important to understand when you should use a cache and when you should use Redis; this can be the difference between success and a headache.</p>
<h3>Links</h3>
<ul>
<li><b>Source:</b> <a target="_blank" href="https://github.com/otaviojava/redis-collections" >https://github.com/otaviojava/redis-collections</a></li>
<li><b>Redis:</b> <a target="_blank" href="http://redis.io/" >http://redis.io/</a></li>
<li><b>Redis commands:</b> <a target="_blank" href="http://redis.io/commands" >http://redis.io/commands</a></li>
<li><b>Redis documentation:</b> <a target="_blank" href="http://redis.io/documentation" >http://redis.io/documentation</a></li>
<li><b>Jedis:</b> <a target="_blank" href="https://github.com/xetorthio/jedis" >https://github.com/xetorthio/jedis</a></li>
</ul>
http://www.java.net/blog/otaviojava/archive/2014/08/01/introduces-redis-java-redis-collections#commentsBlogsDatabasesJ2EEJ2SEJava EnterpriseJava ToolsPatternsPerformanceProgrammingResearchSat, 02 Aug 2014 05:45:12 +0000otaviojava904164 at http://www.java.netFrom framework to platformhttp://www.java.net/blog/sgdev-blog/archive/2014/07/14/framework-platform
<!-- | 0 --><p>When I started my career as a Java developer close to 10 years ago, the industry was going through a revolutionary change. The Spring framework, released in 2003, was quickly gaining ground and becoming a serious challenger to the bulky J2EE platform. Having gone through that transition, I quickly found myself favouring the Spring framework over the J2EE platform, even though declaring beans in the earlier versions of Spring was very tedious.</p>
<p>What happened next was the revamping of the J2EE standard, which was later renamed JEE. Still, what dominated this era was the use of open source frameworks over the platform proposed by Sun. This practice gives developers full control over the technologies they use, but inflates the deployment size. Slowly, as cloud applications became the norm, I observed a trend of moving infrastructure services from framework back to platform. However, this time, it is not motivated by cloud applications alone.</p>
<p><b><span style="font-size: large;">Framework vs Platform</span></b></p>
<p>I had never heard of, or had to use, any framework in school. However, after joining the industry, it is tough to build scalable and configurable software without the help of one.</p>
<p>From my understanding, any application consists of code that implements business logic and other code that provides helpers, utilities, or infrastructure setup. The code that is not related to business logic, being used repeatedly across many projects, can be generalised and extracted for reuse. The output of this extraction process is a framework.</p>
<p>To put it briefly, a framework is any code that is not related to business logic but helps to address common concerns in applications and is fit to be reused.</p>
<p>Following this definition, MVC, Dependency Injection, caching, JDBC templates, and ORM are all considered frameworks.</p>
<p>A platform is similar to a framework in that it also helps to address common concerns in applications, but in contrast to a framework, the service is provided outside the application. Therefore, a common service endpoint can serve multiple applications at the same time. The services provided by a JEE application server or by Amazon Web Services are examples of platforms.</p>
<p>Comparing the two approaches, a platform is more scalable and easier to use than a framework, but it also offers less control. Because of these advantages, a platform seems to be the better approach when we build a <a href="http://sgdev-blog.blogspot.sg/2014/05/how-to-build-java-based-cloud.html">Cloud Application</a>.</p>
<p><span style="font-size: large;"><b>When should we use platform over framework</b></span></p>
<p>Moving toward platforms does not mean that developers will get rid of frameworks. Rather, platforms only complement frameworks in building applications. However, on some special occasions we have a choice between using a platform or a framework to achieve the final goal. In my personal opinion, a platform is preferable to a framework when the following conditions are met:</p>
<ul>
<li>The framework is tedious to use and maintain</li>
<li>The service has some common information to be shared among instances</li>
<li>The service can utilize additional hardware to improve performance</li>
</ul>
<div>
In the office, we still use the Spring framework, the Play framework or RoR in our applications, and this will not change any time soon. However, to move into the Cloud era, we migrated some of our existing products from internal hosting to Amazon EC2 servers. In order to make the best use of the Amazon infrastructure and improve software quality, we have done some major refactoring of our current software architecture.</div>
<div>
</div>
<div>
Here are some platforms that we are integrating our products with:</div>
<div>
</div>
<div>
<b>Amazon Simple Storage Service (Amazon S3) &amp; &nbsp;<a href="http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html">Amazon Cloud Front</a></b></div>
<p>
We found that Amazon CloudFront is pretty useful for boosting the average response time of our applications. Previously, we hosted most of the applications in our internal server farms, located in the UK and US. This led to a noticeable increase in response time for customers on other continents. Fortunately, Amazon has a much larger infrastructure, with server farms built all around the world. That helps to guarantee a consistent delivery time, regardless of the customer's location.</p>
<p>Currently, due to the manual effort of setting up a new instance for each application, we feel that the best use of Amazon CloudFront is for static content, which we host separately from the application in Amazon S3. This practice gives us a double benefit in performance: more consistent delivery times offered by the <a href="http://en.wikipedia.org/wiki/Content_delivery_network">CDN</a>&nbsp;plus a <a href="http://sgdev-blog.blogspot.sg/2014/01/maximum-concurrent-connection-to-same.html">separate connection count in the browser for the static content</a>.</p>
<p><b>Amazon ElastiCache</b></p>
<p>Caching has never been easy in a cluster environment. The word "cluster" means that your object will not be stored in and retrieved from local memory. Rather, it is sent and retrieved over the network. This task was quite tricky in the past, because developers needed to sync the records from one node to another, and not every caching framework supports this automatically. Our preferred framework for distributed caching was <a href="http://www.infoq.com/articles/open-terracotta-intro">Terracotta</a>.</p>
<p>Now, we have turned to Amazon ElastiCache because it is cheap and reliable, and it saves us the huge effort of setting up and maintaining a distributed cache. It is worth highlighting that distributed caching is not meant to replace local caching. The difference in performance suggests that we should only prefer distributed caching over local caching when users need to access real-time, temporary data.</p>
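<p>In practice, the local-versus-distributed trade-off usually ends in a two-level lookup: check the cheap local cache first and fall back to the distributed one. Below is a hedged sketch of that idea; the distributed cache is faked with a plain map, whereas in production it would be a Memcached/ElastiCache client:</p>

```java
import java.util.HashMap;
import java.util.Map;

// Two-level cache sketch: a fast local map in front of a (simulated) distributed cache.
public class TwoLevelCache {
    private final Map<String, String> local = new HashMap<>();
    private final Map<String, String> distributed; // stand-in for an ElastiCache client

    public TwoLevelCache(Map<String, String> distributed) {
        this.distributed = distributed;
    }

    public String get(String key) {
        String value = local.get(key);
        if (value == null) {
            value = distributed.get(key);   // network hop in a real deployment
            if (value != null) {
                local.put(key, value);      // populate the local tier for next time
            }
        }
        return value;
    }
}
```

<p>Note the sketch ignores invalidation and expiry, which is exactly why the local tier only suits data that may be slightly stale.</p>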
<p><b>Event Logging for Data Analytics</b></p>
<p>In the past, we used Google Analytics for analysing user behaviour, but later decided to build an internal data warehouse. One of the motivations was the ability to track events from both browsers and servers. The event tracking system uses MongoDB as its database, as it allows us to quickly store huge amounts of events.</p>
<p>To simplify the creation and retrieval of events, we chose JSON as the format for events. We cannot simply send events directly to the event tracking server, because browsers prevent cross-domain requests. For this reason, Google Analytics sends its events to the server in the form of a GET request for a static resource. As we have full control over how the application is built, we chose to send events back to the application server first and route them to the event tracking server later. This approach is much more convenient and powerful.</p>
<p><b>Knowledge Portal</b></p>
<p>In the past, applications accessed data from a database or an internal file repository. However, to scale better, we gathered all knowledge into a knowledge portal. We also built a query language to retrieve knowledge from this portal. This approach adds one additional layer to the knowledge retrieval process, but fortunately for us, our system does not need to serve real-time data, so we can utilize caching to improve performance.</p>
<p><b><span style="font-size: large;">Conclusion</span></b></p>
<p>Above is some of our experience transforming our software architecture when moving to the Cloud. Please share your experience and opinions with us.</p>
http://www.java.net/blog/sgdev-blog/archive/2014/07/14/framework-platform#commentsBloggingBlogsJ2EEJava CommunicationsJava EnterprisePerformanceWeb ApplicationsMon, 14 Jul 2014 19:45:20 +0000sgdev-blog903992 at http://www.java.netCommon mistakes when using Spring MVChttp://www.java.net/blog/sgdev-blog/archive/2014/07/05/common-mistakes-when-using-spring-mvc
<!-- 298 | 94 --><script src="https://google-code-prettify.googlecode.com/svn/loader/run_prettify.js"></script><p><a href="http://3.bp.blogspot.com/-oa8WgFgG3vA/U7erMrLo4EI/AAAAAAAABVs/kNjJOkrQKEQ/s1600/spring_framework.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://3.bp.blogspot.com/-oa8WgFgG3vA/U7erMrLo4EI/AAAAAAAABVs/kNjJOkrQKEQ/s1600/spring_framework.png" style="cursor: move;" /></a></p>
<p>When I started my career around 10 years ago, Struts MVC was the norm in the market. However, over the years, I have observed Spring MVC slowly gaining popularity. This is not a surprise to me, given the seamless integration of Spring MVC with the Spring container and the flexibility and extensibility it offers.</p>
<p>From my journey with Spring so far, I have often seen people make some common mistakes when configuring the Spring framework. This happens more often than it did back when people still used the Struts framework. I guess it is the trade-off between flexibility and usability. Plus, the Spring documentation is full of samples but lacking in explanation. To help fill this gap, this article will try to elaborate on and explain three common issues that I often see people encounter.</p>
<p><b><span style="font-size: large;">Declare beans in Servlet context definition file</span></b></p>
<p>Every one of us knows that Spring uses <a href="http://docs.spring.io/spring/docs/3.0.x/api/org/springframework/web/context/ContextLoaderListener.html"><i>ContextLoaderListener</i></a> to load the Spring application context. Still, when declaring the <i><a href="http://docs.spring.io/spring/docs/4.0.0.RELEASE/javadoc-api/org/springframework/web/servlet/DispatcherServlet.html">DispatcherServlet</a></i>, we need to create the servlet context definition file named "${servlet.name}-context.xml". Ever wonder why?</p>
<p><b>Application Context Hierarchy</b></p>
<p>Not all developers know that the Spring application context has a hierarchy. Let us look at this method:</p>
<p><i>org.springframework.context.ApplicationContext.getParent()</i></p>
<p>It tells us that a Spring application context can have a parent. So, what is this parent for?</p>
<p>If you download the source code and do a quick reference search, you will find that the Spring application context treats the parent as its extension. If you do not mind reading code, let me show you one example of the usage, in the method <i>BeanFactoryUtils.beansOfTypeIncludingAncestors()</i>:</p>
<pre class="prettyprint"><code>if (lbf instanceof HierarchicalBeanFactory) {
    HierarchicalBeanFactory hbf = (HierarchicalBeanFactory) lbf;
    if (hbf.getParentBeanFactory() instanceof ListableBeanFactory) {
        Map<String, T> parentResult =
            beansOfTypeIncludingAncestors((ListableBeanFactory) hbf.getParentBeanFactory(), type);
        ...
    }
}
return result;</code></pre>
If you go through the whole method, you will find that the Spring application context scans its internal context for beans before searching the parent context. With this strategy, the Spring application context effectively performs a reverse breadth-first search to look for beans.</p>
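<p>The hierarchy can be modelled in a few lines. This is a toy model, not Spring's actual implementation: each context first consults its own beans and only then delegates to its parent, so local definitions shadow inherited ones.</p>

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Spring's context hierarchy: local beans shadow parent beans.
public class ToyContext {
    private final ToyContext parent;
    private final Map<String, Object> beans = new HashMap<>();

    public ToyContext(ToyContext parent) {
        this.parent = parent;
    }

    public void register(String name, Object bean) {
        beans.put(name, bean);
    }

    public Object getBean(String name) {
        Object bean = beans.get(name);       // internal context first
        if (bean == null && parent != null) {
            bean = parent.getBean(name);     // then the ancestors
        }
        return bean;
    }
}
```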
<p><b><i>ContextLoaderListener</i></b></p>
<p>This is a well-known class that every developer should know. It helps to load the Spring application context from a pre-defined context definition file. As it implements <i><a href="http://docs.oracle.com/javaee/6/api/javax/servlet/ServletContextListener.html">ServletContextListener</a></i>, the Spring application context is loaded as soon as the web application is loaded. This brings an indisputable benefit when the Spring container contains beans with <i>@PostConstruct</i> annotations or batch jobs.</p>
<p>In contrast, any bean defined in the servlet context definition file will not be constructed until the servlet is initialized. When is the servlet initialized? It is non-deterministic. In the worst case, you may need to wait until a user makes the first hit to the servlet mapping URL to get the Spring context loaded.</p>
<p>With the above information, where should you declare all your precious beans? I feel the best place to do so is the context definition file loaded by <i>ContextLoaderListener</i>, and nowhere else. The trick here is that the ApplicationContext is stored as a servlet attribute under the key</p>
<p><i>org.springframework.web.context.WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE</i></p>
<p>Later, <i>DispatcherServlet</i> will load this context from the <i>ServletContext</i> and assign it as the parent application context.</p>
<pre class="prettyprint"><code>protected WebApplicationContext initWebApplicationContext() {
    WebApplicationContext rootContext =
        WebApplicationContextUtils.getWebApplicationContext(getServletContext());
    ...
}</code></pre>
Because of this behaviour, it is highly recommended to create an empty servlet application context definition file and define your beans in the parent context instead. This helps to avoid duplicated bean creation when the web application is loaded and guarantees that batch jobs are executed immediately.</p>
<p>Theoretically, defining a bean in the servlet application context definition file makes the bean unique and visible to that servlet only. However, in my 8 years of using Spring, I have hardly found any use for this feature except for defining Web Service endpoints.</p>
<p><b><span style="font-size: large;">Declare <i>Log4jConfigListener </i>after <i>ContextLoaderListener</i></span></b></p>
<p>This is a minor bug, but it catches you when you are not paying attention. <i>Log4jConfigListener</i> is my preferred solution over <i>-Dlog4j.configuration</i>, as we can control the log4j loading without altering the server bootstrap process.</p>
<p>Obviously, this should be the first listener declared in your web.xml. Otherwise, all of your effort to declare a proper logging configuration will be wasted.</p>
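<p>In web.xml terms, the safe ordering looks like this (an illustrative fragment for a Spring 3.x-era application; <i>Log4jConfigListener</i> was removed in later Spring versions):</p>

```xml
<!-- 1. Configure logging before anything else starts -->
<listener>
  <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>

<!-- 2. Only then load the root Spring application context -->
<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```

<p>Servlet containers invoke listeners in declaration order, so swapping the two means the context (and every bean it constructs) starts logging before log4j is configured.</p>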
<p><b><span style="font-size: large;">Duplicated Beans due to mismanagement of bean exploration</span></b></p>
<p>In the early days of Spring, developers spent more time typing in XML files than in Java classes. For every new bean, we needed to declare it and wire the dependencies ourselves, which was clean and neat, but very painful. No surprise that later versions of the Spring framework evolved toward greater usability. Nowadays, developers may only need to declare the transaction manager, data source, property source and web service endpoints, and leave the rest to component scanning and auto-wiring.</p>
<p>I like these new features, but this great power comes with great responsibility; otherwise, things get messy quickly. Component scanning and bean declaration in XML files are totally independent. Therefore, it is perfectly possible to end up with identical beans of the same class in the bean container if a bean is both annotated for component scanning and declared manually as well. Fortunately, this kind of mistake should only happen to beginners.</p>
<p>The situation gets more complicated when we need to integrate embedded components into the final product. Then we really need a strategy to avoid duplicated bean declarations.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-jbh6Poz83lA/U7i8v-J6hoI/AAAAAAAABV8/dr112C7qOp0/s1600/spring_component.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-jbh6Poz83lA/U7i8v-J6hoI/AAAAAAAABV8/dr112C7qOp0/s1600/spring_component.png" height="263" width="400" /></a></div>
<p>The above diagram shows a realistic example of the kind of problem we face in daily life. Most of the time, a system is composed of multiple components, and often one component serves multiple products. Each application and component has its own beans. In this case, what is the best way to declare beans so as to avoid duplicated declarations?</p>
<p>Here is my proposed strategy:</p>
<ul>
<li>Ensure that each component starts with a dedicated package name. It makes our life easier when we need to do component scanning.</li>
<li>Don't dictate to the team that develops a component how to declare beans in the component itself (annotations versus XML declarations). It is the responsibility of the developer who packs the components into the final product to ensure there are no duplicated bean declarations.</li>
<li>If there is a context definition file packed within the component, place it inside a package rather than in the root of the classpath. It is even better to give it a specific name. For example, <i>src/main/resources/spring-core/spring-core-context.xml</i> is way better than <i>src/main/resources/application-context.xml</i>. Imagine what would happen if we packed a few components that each contain the same file <i>application-context.xml</i> in the identical location!</li>
<li>Don't add any component-scan annotation (<i>@Component</i>, <i>@Service</i> or <i>@Repository</i>) if you already declare the bean in a context file.</li>
<li>Split environment-specific beans like the <i>data source</i> and <i>property source</i> into a separate file and reuse it.</li>
<li>Do not do component scanning on a general package. For example, instead of scanning the <i>org.springframework</i> package, it is easier to manage if we scan several sub-packages like <i>org.springframework.core</i>, <i>org.springframework.context</i>, <i>org.springframework.ui</i>,...</li>
</ul>
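<p>Applying these rules, a product-level context file can import each component's context from its dedicated package and keep component scanning narrow. The sketch below assumes hypothetical component, folder, and package names (<i>spring-report</i>, <i>env</i>, <i>com.example.product.web</i>); only <i>spring-core/spring-core-context.xml</i> comes from the example above:</p>

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Each component ships its own context file under a dedicated package -->
    <import resource="classpath:spring-core/spring-core-context.xml"/>
    <import resource="classpath:spring-report/spring-report-context.xml"/>

    <!-- Environment-specific beans (data source, property sources) live in a
         separate, reusable file -->
    <import resource="classpath:env/datasource-context.xml"/>

    <!-- Scan a narrow sub-package, never a general one -->
    <context:component-scan base-package="com.example.product.web"/>
</beans>
```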
<p><b><span style="font-size: large;">Conclusions</span></b></p>
<p>I hope you find the above tips useful in your daily work. If you have any doubts or other ideas, please leave feedback.</p>
http://www.java.net/blog/sgdev-blog/archive/2014/07/05/common-mistakes-when-using-spring-mvc#commentsBloggingBlogsJ2EEJava CommunicationsJava EnterpriseWeb ApplicationsSun, 06 Jul 2014 04:52:35 +0000sgdev-blog903882 at http://www.java.netUsing the Java 8 DateTime Classes with JPA!http://www.java.net/blog/montanajava/archive/2014/06/17/using-java-8-datetime-classes-jpa
<h1>Using the Java 8 Date Time Classes with JPA!</h1>
<p>With the <a href="http://www.oracle.com/technetwork/java/javase/overview/index.html">Java SE 8</a> release, developers get a splendid new best-in-class Date-Time API. Wouldn't it be nice if you could use it with JPA? Not so fast. JPA, and for that matter JDBC, know nothing about the new classes, and if you use them in your entities, JPA will map them to BLOBs in your database by default. This happens in DDL or database creation, in queries, and in inserts and updates. Take a simple entity as an example:</p>
<pre class="prettyprint"><code><pre><br />@Entity<br />public class Trip {<br /> @Id long id;<br /> LocalDateTime departure;<br />}</pre></code></pre>
<p>The persistence of the <em>departure</em> attribute will work perfectly, if you like saving DateTimes as BLOBs that is. A BLOB for a datetime is hardly the mapping anyone would want, adversely affecting your indexing and querying options, not to mention being an inefficient choice for storage. If you are taking advantage of either the database or the DDL file generation features of JPA 2.1, the above Entity will also result in a BLOB <em>departure</em> field being created in the <em>Trip</em> table, directly or indirectly. What most people will want is a sensible mapping to the Date, Time, and Timestamp types of SQL. This can be achieved by creating custom converters under JPA 2.1 that use old-school classes from the java.sql package which in turn <em>are</em> supported by JPA and JDBC. Here is an example for <code class="prettyprint">LocalDate</code>:
</p>
<pre class="prettyprint"><code><pre><br />@Converter(autoApply = true)<br />public class LocalDatePersistenceConverter implements<br /> AttributeConverter<java.time.LocalDate, java.sql.Date> {<br /> @Override<br /> public java.sql.Date convertToDatabaseColumn(LocalDate entityValue) {<br /> return java.sql.Date.valueOf(entityValue);<br /> }<br /><br /> @Override<br /> public LocalDate convertToEntityAttribute(java.sql.Date databaseValue) {<br /> return databaseValue.toLocalDate();<br /> }<br />}</pre></code></pre>
<p>Easy as pie. JPA 2.1 gives you a portable way of fixing our not-quite-up-to-snuff JDBC/JPA specification dilemma, simply by providing you with a standard way to create a custom converter that implements <a href="http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html">javax.persistence.AttributeConverter</a>. This interface provides two methods that define the mappings: <a href="http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html#convertToEntityAttribute%28Y%29">convertToEntityAttribute</a> maps from a JDBC-supported class when you are reading <em>from</em> the database, while <a href="http://docs.oracle.com/javaee/7/api/javax/persistence/AttributeConverter.html#convertToDatabaseColumn%28X%29">convertToDatabaseColumn</a> maps to a JDBC-supported class when you are persisting <em>to</em> the database. The usual suspects for such mappings will be <a href="http://docs.oracle.com/javase/8/docs/api/java/sql/Date.html">java.sql.Date</a>, <a href="http://docs.oracle.com/javase/8/docs/api/java/sql/Time.html">java.sql.Time</a>, and <a href="http://docs.oracle.com/javase/8/docs/api/java/sql/Timestamp.html">java.sql.Timestamp</a>, and even the old stalwart <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/String.html">java.lang.String</a>. In the case of <a href="http://docs.oracle.com/javase/8/docs/api/java/time/LocalDateTime.html">LocalDateTime</a>, the converter looks like this:</p>
<pre class="prettyprint"><code><pre><br />@Converter(autoApply = true)<br />public class LocalDateTimePersistenceConverter implements<br /> AttributeConverter<LocalDateTime, Timestamp> {<br /> @Override<br /> public java.sql.Timestamp convertToDatabaseColumn(LocalDateTime entityValue) {<br /> return Timestamp.valueOf(entityValue);<br /> }<br /><br /> @Override<br /> public LocalDateTime convertToEntityAttribute(java.sql.Timestamp databaseValue) {<br /> return databaseValue.toLocalDateTime();<br /> }<br />}</pre></code></pre>
<p>By mapping LocalDateTime to java.sql.Timestamp, we get an automatic mapping to the SQL type we wish to have. Notice the <a href="http://docs.oracle.com/javaee/7/api/javax/persistence/Converter.html">@Converter</a> annotation, which has an <em>autoApply</em> element, here set to true. That means that you can now use LocalDateTime in any of your entities without requiring further annotation. If this is not what you want, set it to false, and then annotate the attribute in your entity like this:</p>
<pre class="prettyprint"><code><pre><br />@Convert(converter = LocalTimePersistenceConverter.class)<br />LocalTime departure;<br /></pre></code></pre>
<p>JDBC has no concept of persisting timezone information with dates and timestamps. For the LocalDate and LocalDateTime classes, that is a good semantic fit. If you do wish to store java.time classes that include timezone information, mapping these to a String will provide for an easy, portable solution. See the following example for <a href="http://docs.oracle.com/javase/8/docs/api/java/time/OffsetDateTime.html">OffsetDateTime</a>:</p>
<pre class="prettyprint"><code><pre><br />@Converter(autoApply = true)<br />public class OffsetDateTimePersistenceConverter implements<br /> AttributeConverter<OffsetDateTime, String> {<br /><br /> /**<br /> * @return a value as a String such as 2014-12-03T10:15:30+01:00<br /> * @see OffsetDateTime#toString()<br /> */<br /> @Override<br /> public String convertToDatabaseColumn(OffsetDateTime entityValue) {<br /> return Objects.toString(entityValue, null);<br /> }<br /><br /> @Override<br /> public OffsetDateTime convertToEntityAttribute(String databaseValue) {<br /> return OffsetDateTime.parse(databaseValue);<br /> }<br />}</pre></code></pre>
<p>It keeps getting better! Java 8 now has extremely handy <a href="http://docs.oracle.com/javase/8/docs/api/java/time/Period.html">Period</a> and <a href="http://docs.oracle.com/javase/8/docs/api/java/time/Duration.html">Duration</a> classes (anyone out there ever have to deal with schedules or timetables?), as well as sundry other classes, whose instances you may find the need to persist. These too can be easily persisted as Strings with simple converters. Here is a converter for Period using the existing JDBC String mapping to do the dirty work:</p>
<pre class="prettyprint"><code><pre><br />@Converter(autoApply = true)<br />public class PeriodPersistenceConverter implements AttributeConverter<Period, String> {<br /><br /> /**<br /> * @return an ISO-8601 representation of this duration.<br /> * @see Period#toString()<br /> */<br /> @Override<br /> public String convertToDatabaseColumn(Period entityValue) {<br /> return Objects.toString(entityValue, null);<br /> }<br /><br /> @Override<br /> public Period convertToEntityAttribute(String databaseValue) {<br /> return Period.parse(databaseValue);<br /> }<br />}<br /></pre></code></pre>
<p>The JPA persistence unit needs to know about the converters. This can be done by adding the converters to the persistence.xml. An excerpt from the persistence.xml for the unit tests of the converters looks like this:</p>
<pre class="prettyprint"><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;<br />&lt;persistence version=&quot;1.0&quot;<br /> xmlns=&quot;http://java.sun.com/xml/ns/persistence&quot;&gt;<br />&lt;persistence-unit name=&quot;java8DateTimeTestPersistenceUnit&quot;<br /> transaction-type=&quot;RESOURCE_LOCAL&quot;&gt;<br /> &lt;provider&gt;org.eclipse.persistence.jpa.PersistenceProvider&lt;/provider&gt;<br /> &lt;class&gt;org.parola.domain.Book&lt;/class&gt;<br /> &lt;class&gt;org.parola.domain.Appointment&lt;/class&gt;<br /> &lt;class&gt;org.parola.domain.SpaceTravel&lt;/class&gt;<br /> ...<br /> &lt;class&gt;org.parola.util.date.LocalDatePersistenceConverter&lt;/class&gt;<br /> &lt;class&gt;org.parola.util.date.LocalTimePersistenceConverter&lt;/class&gt;<br /> &lt;class&gt;org.parola.util.date.LocalDateTimePersistenceConverter&lt;/class&gt;<br /> ... <br /> &lt;properties&gt;<br /> &lt;property name=&quot;eclipselink.logging.level&quot; value=&quot;INFO&quot; /&gt;<br /> &lt;property name=&quot;eclipselink.target-database&quot; value=&quot;DERBY&quot; /&gt;<br /> &lt;property name=&quot;javax.persistence.jdbc.driver&quot; <br /> value=&quot;org.apache.derby.jdbc.EmbeddedDriver&quot; /&gt;<br /> &lt;property name=&quot;javax.persistence.jdbc.url&quot; <br /> value=&quot;jdbc:derby:memory:java8DateTimeTest;create=true&quot; /&gt;<br /> &lt;property name=&quot;javax.persistence.jdbc.user&quot; value=&quot;&quot; /&gt;<br /> &lt;property name=&quot;javax.persistence.jdbc.password&quot; value=&quot;&quot; /&gt;<br /> &lt;property name=&quot;javax.persistence.schema-generation.database.action&quot; <br /> value=&quot;drop-and-create&quot;/&gt;<br /> &lt;/properties&gt;<br />&lt;/persistence-unit&gt;<br />&lt;/persistence&gt;</code></pre>
<p>Some databases have custom Java types for storing timezone data along with Date/Time information, <a href="https://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.db2z10.doc.intro%2Fsrc%2Ftpc%2Fdb2z_datetimetimestamp.htm">DB2 as of version 10</a> and <a href="http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements001.htm#SQLRF50951">Oracle</a> being two examples that provide such support for zoned timestamps. If you have such a database, you can still make use of a custom JPA 2.1 converter, stay in the world of Java 8 datetime goodness, and avoid non-standard vendor-provided classes in your code. </p>
<p>For other databases, or in the event you do not wish to use your database's proprietary feature, an attribute of type <a href="http://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html">ZonedDateTime</a>, using the appropriate converter, will be stored in your database in a human-readable form and will be sortable in accordance with <a href="http://www.iso.org/iso/home/standards/iso8601.htm">ISO 8601</a> norms. It will not be amenable to Date/Time math within SQL, although your database should have built-in from-String converter functions. You can establish constraints in the database if you want further control over your data integrity, which is of potential importance for databases additionally serving non-Java 8 clients. Database functions come to mind as a solution if you need native types in the SQL world while retaining <code class="prettyprint">VARCHAR</code> storage. Any deeper mappings, such as to multiple columns separating out the timezone information, exceed what the current specification foresees for converters.</p>
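<p>Such a <code class="prettyprint">ZonedDateTime</code> converter can mirror the OffsetDateTime example. Again a standalone sketch, without the JPA <em>@Converter(autoApply = true)</em> annotation and <em>AttributeConverter</em> interface:</p>

```java
import java.time.ZonedDateTime;
import java.util.Objects;

// Sketch of a ZonedDateTime-to-String mapping; under JPA 2.1 this would be
// annotated @Converter(autoApply = true) and implement
// AttributeConverter<ZonedDateTime, String>.
class ZonedDateTimePersistenceConverter {

    // Produces a value such as "2014-12-03T10:15:30+01:00[Europe/Paris]".
    public String convertToDatabaseColumn(ZonedDateTime entityValue) {
        return Objects.toString(entityValue, null);
    }

    public ZonedDateTime convertToEntityAttribute(String databaseValue) {
        return databaseValue == null ? null : ZonedDateTime.parse(databaseValue);
    }
}
```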
<h2>Resources</h2>
<p>The project itself is located at <a href="https://bitbucket.org/montanajava/jpaattributeconverters">JPAAttributeConverters</a> on BitBucket with a commercially-friendly license. You will need Maven to build the project. The project has extensive unit tests and to that end includes several highly minimalistic domain classes. The tests use an in-memory Derby database and a JPA 2.1-compliant driver. Be sure to test against your database, easily done by modifying the persistence.xml. You can check out the code using Mercurial: hg clone <a href="https://montanajava@bitbucket.org/montanajava/jpaattributeconverters" title="https://montanajava@bitbucket.org/montanajava/jpaattributeconverters">https://montanajava@bitbucket.org/montanajava/jpaattributeconverters</a></p>
<p>Have fun with the new Java 8 Date and Time classes! We <del>say good riddance</del> bid a fond farewell to the venerable java.util and java.sql Date/Time classes. As soon as you can use Java 8 in your environment, there is no reason to wait for updates to the JPA and JDBC specifications to be able to benefit from these classes. If support does become available within a future standard, modifying your code should consist of removing some lines from the persistence.xml and deleting some classes.</p>
http://www.java.net/blog/montanajava/archive/2014/06/17/using-java-8-datetime-classes-jpa#commentsBlogsDatabasesEJBGlassFishGlassFishJ2EEJava EnterpriseProgrammingTue, 17 Jun 2014 12:49:28 +0000montanajava903120 at http://www.java.netWildFly Cluster on Raspberry Pi (Tech Tip #28)http://www.java.net/blog/arungupta/archive/2014/05/30/wildfly-cluster-raspberry-pi-tech-tip-28
<img src="http://www.java.net/sites/default/files/arun-photo3_0.png" border="0" align="left" /><p><a href="http://blog.arungupta.me/2014/05/wildfly-on-raspberry-pi-techtip-24/">Tech Tip #25</a> showed how to configure WildFly on Raspberry Pi. <a href="http://blog.arungupta.me/2014/05/wildfly-managed-domain-raspberrypi-techtip27">Tech Tip #27</a> showed how to set up WildFly on two Raspberry Pis in managed domain mode. This tech tip will show how to set up a WildFly cluster over those two hosts.</p>
<p>WildFly supports <a href="http://mod-cluster.jboss.org/">mod_cluster</a> out of the box. There are <a href="https://access.redhat.com/site/solutions/101793">several advantages</a> of mod_cluster:</p>
<ul>
<li>Dynamic configuration of httpd workers</li>
<li>Server-side load balance factor calculation</li>
<li>Fine-grained web app lifecycle control</li>
</ul>
<p>However, there is <a href="http://mod-cluster.jboss.org/downloads/1-2-6-Final-bin">no ARM build</a> available for it yet, so we'll use <a href="http://httpd.apache.org/docs/2.2/mod/mod_proxy.html">mod_proxy</a> instead, which comes pre-installed as part of Apache2. The Domain Controller and the HTTP server need not be on the same host, and that's the configuration we'll use for our setup. So effectively, there will be three Raspberry Pis:</p>
<ul>
<li>Domain Controller</li>
<li>Host Controller</li>
<li>Web Server</li>
</ul>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-cluster-techtip28.png"><img class="alignnone size-large wp-image-11595" alt="raspi-cluster-techtip28" src="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-cluster-techtip28-1024x678.png" width="640" height="423" /></a></p>
<p>Let's get started!</p>
<ol>
<li>Before installing any modules, including Apache, on Raspbian, the system needs to be fully updated:<br />
<pre class="prettyprint"><code>pi@raspberrypi~ $ <strong>sudo apt-get update &amp;&amp; sudo apt-get upgrade</strong><br />Get:1 <a href="http://reflection.oss.ou.edu" title="http://reflection.oss.ou.edu">http://reflection.oss.ou.edu</a> wheezy Release.gpg [490 B]<br />Hit <a href="http://repository.wolfram.com" title="http://repository.wolfram.com">http://repository.wolfram.com</a> stable Release.gpg <br />Hit <a href="http://repository.wolfram.com" title="http://repository.wolfram.com">http://repository.wolfram.com</a> stable Release <br />Get:2 <a href="http://reflection.oss.ou.edu" title="http://reflection.oss.ou.edu">http://reflection.oss.ou.edu</a> wheezy Release [14.4 kB] <br />Get:3 <a href="http://raspberrypi.collabora.com" title="http://raspberrypi.collabora.com">http://raspberrypi.collabora.com</a> wheezy Release.gpg [836 B] <br />Get:4 <a href="http://archive.raspberrypi.org" title="http://archive.raspberrypi.org">http://archive.raspberrypi.org</a> wheezy Release.gpg [490 B] <br /><br />. . .<br /><br />Setting up udisks (1.0.4-7wheezy1) ...<br />Setting up rpi-update (20140321) ...<br />Setting up ssh (1:6.0p1-4+deb7u1) ...<br />Setting up perl-modules (5.14.2-21+rpi2+deb7u1) ...<br />Setting up perl (5.14.2-21+rpi2+deb7u1) ...<br />Setting up libdpkg-perl (1.16.14+rpi1) ...<br />Setting up dpkg-dev (1.16.14+rpi1) ...<br />Processing triggers for menu ...</code></pre><br />
Trying to install Apache without updating the system will give weird errors:<br />
<pre class="prettyprint"><code>Err <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main apache2.2-bin armhf 2.2.22-13<br />404 Not Found<br />Err <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main apache2-utils armhf 2.2.22-13<br />404 Not Found<br />Err <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main apache2.2-common armhf 2.2.22-13<br />404 Not Found<br />Err <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main apache2-mpm-worker armhf 2.2.22-13<br />404 Not Found<br />Err <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main apache2 armhf 2.2.22-13<br />404 Not Found</code></pre><br />
Now install Apache HTTP as:<br />
<pre class="prettyprint"><code>pi@raspberrypi /~ $ <strong>sudo apt-get install apache2</strong><br />Reading package lists... Done<br />Building dependency tree<br />Reading state information... Done<br />The following extra packages will be installed:<br />apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert<br />Suggested packages:<br />apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist<br />The following NEW packages will be installed:<br />apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert<br />0 upgraded, 10 newly installed, 0 to remove and 1 not upgraded.<br />Need to get 1,352 kB of archives.<br />After this operation, 4,916 kB of additional disk space will be used.<br />Do you want to continue [Y/n]? <strong>y<br /></strong>Get:1 <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main libapr1 armhf 1.4.6-3+deb7u1 [90.9 kB]<br />Get:2 <a href="http://mirrordirector.raspbian.org/raspbian/" title="http://mirrordirector.raspbian.org/raspbian/">http://mirrordirector.raspbian.org/raspbian/</a> wheezy/main libaprutil1 armhf 1.4.1-3 [77.1 kB]<br /><br />. . .<br /><br />Setting up apache2-mpm-worker (2.2.22-13+deb7u1) ...<br />[....] Starting web server: apache2apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName<br />. ok<br />Setting up apache2 (2.2.22-13+deb7u1) ...<br />Setting up ssl-cert (1.0.32) ...</code></pre><br />
The log messages show that the server's name could not be determined, and 127.0.1.1 is used for ServerName instead. This can be fixed by editing "/etc/apache2/apache2.conf" and adding the following line:<br />
<code class="prettyprint">ServerName localhost</code>
</li>
<li>After Apache is installed, the "mod_proxy" modules already exist in the "/usr/lib/apache2/modules" directory and just need to be enabled. Create "/etc/apache2/mods-enabled/mod_proxy.load" as:<br />
<pre class="prettyprint"><code>LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so<br />LoadModule proxy_balancer_module /usr/lib/apache2/modules/mod_proxy_balancer.so<br />LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so<br />LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so</code></pre><br />
This file will be picked up via a directive in "/etc/apache2/apache2.conf".</p>
<p>Restart the server:<br />
<code class="prettyprint">sudo service apache2 restart</code><br />
If there are any errors, then you can see them in the "/var/log/apache2/error.log" file.</p>
<p>The list of modules loaded can be seen using:<br />
<pre class="prettyprint"><code>~ <strong>/apachectl -t -D DUMP_MODULES</strong><br />Loaded Modules:<br />core_module (static)<br />log_config_module (static)<br />logio_module (static)<br />version_module (static)<br />mpm_worker_module (static)<br />http_module (static)<br />so_module (static)<br />alias_module (shared)<br />auth_basic_module (shared)<br />authn_file_module (shared)<br />authz_default_module (shared)<br />authz_groupfile_module (shared)<br />authz_host_module (shared)<br />authz_user_module (shared)<br />autoindex_module (shared)<br />cgid_module (shared)<br />deflate_module (shared)<br />dir_module (shared)<br />env_module (shared)<br />mime_module (shared)<br /><strong>proxy_module (shared)</strong><br /><strong>proxy_balancer_module (shared)</strong><br /><strong>proxy_http_module (shared)<br />headers_module (shared)</strong><br />negotiation_module (shared)<br />reqtimeout_module (shared)<br />setenvif_module (shared)<br />status_module (shared)<br />Syntax OK</code></pre><br />
The newly loaded modules are highlighted in bold.</li>
<li>Provide convenient host names for each of the Raspberry Pis. The name chosen for each host is shown in the table below:<br />
<table>
<tbody>
<tr>
<th>IP Address</th>
<th>Role</th>
<th>Host Name</th>
</tr>
<tr>
<td>10.0.0.27</td>
<td>Domain Controller</td>
<td>raspi-master</td>
</tr>
<tr>
<td>10.0.0.28</td>
<td>Host Controller</td>
<td>raspi-slave</td>
</tr>
<tr>
<td>10.0.0.29</td>
<td>Web Server</td>
<td>raspi-apache</td>
</tr>
</tbody>
</table>
<p>This is enabled by editing "/etc/hostname" on each Raspberry Pi and changing "raspberrypi" to the given name.</p>
<p>In addition, "/etc/hosts" on each Raspberry Pi needs two entries of the following format:<br />
<pre class="prettyprint"><code>127.0.0.1 &lt;Host Name&gt;<br />&lt;IP Address&gt; &lt;Host Name&gt;</code></pre><br />
Here &lt;IP Address&gt; and &lt;Host Name&gt; for each host are taken from the table above.</p>
<p>Finally, add the following entries to "/etc/hosts" on your local machine (a Mac in my case):<br />
<pre class="prettyprint"><code>10.0.0.27 raspi-master.example.com<br />10.0.0.28 raspi-slave.example.com<br />10.0.0.29 raspi-apache.example.com</code></pre><br />
This ensures that any cookies are set from the same domain.</p>
<p>Flush the DNS using:<br />
<code class="prettyprint">sudo dscacheutil -flushcache</code><br />
Now <a href="http://raspi-master.example.com:8330">raspi-master.example.com:8330</a> in the browser shows:</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-master-defaultoutput-techtip28.png"><img class="alignnone wp-image-11596" alt="raspi-master-defaultoutput-techtip28" src="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-master-defaultoutput-techtip28-1024x904.png" width="384" height="339" /></a></p>
<p>And similarly <a href="http://raspi-slave.example.com:8330">raspi-slave.example.com:8330</a> shows:</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-slave-default-output-techtip28.png"><img class="alignnone wp-image-11597" alt="raspi-slave-default-output-techtip28" src="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-slave-default-output-techtip28-1024x966.png" width="384" height="362" /></a></li>
<li>Configure mod_proxy load balancer by editing "/etc/apache2/apache2.conf" and add the following lines at the end of the file:<br />
<pre class="prettyprint"><code>Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED<br />&lt;Proxy balancer://raspicluster&gt;<br />BalancerMember <a href="http://10.0.0.27:8330<br />BalancerMember" title="http://10.0.0.27:8330<br />BalancerMember">http://10.0.0.27:8330<br />BalancerMember</a> <a href="http://10.0.0.28:8330<br />ProxySet" title="http://10.0.0.28:8330<br />ProxySet">http://10.0.0.28:8330<br />ProxySet</a> stickysession=ROUTEID<br />&lt;/Proxy&gt;<br />ProxyPass / balancer://raspicluster/</code></pre><br />
These directives provide load balancing between "master" and "slave". The "Header" and "ProxySet" directives provide sticky sessions.</li>
</ol>
<p>Now accessing <a href="http://raspi-apache.example.com/http-1.0-SNAPSHOT/index.jsp">raspi-apache.example.com/http-1.0-SNAPSHOT/index.jsp</a> shows:</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-apache-default-output-techtip28.png"><img class="alignnone wp-image-11598" alt="raspi-apache-default-output-techtip28" src="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-apache-default-output-techtip28-1024x808.png" width="384" height="303" /></a></p>
<p>And so, in three parts (<a href="http://blog.arungupta.me/2014/05/wildfly-on-raspberry-pi-techtip-24/">part 1</a> and <a href="http://blog.arungupta.me/2014/05/wildfly-managed-domain-raspberrypi-techtip27/">part 2</a>), we learned how to set up a WildFly cluster on Raspberry Pi!</p>
http://www.java.net/blog/arungupta/archive/2014/05/30/wildfly-cluster-raspberry-pi-tech-tip-28#commentsBloggingBlogsCommunityIoTJ2EEJava EnterpriseOpen SourceSat, 31 May 2014 05:51:34 +0000arungupta903452 at http://www.java.netWildFly Managed Domain on Raspberry Pi (Tech Tip #27)http://www.java.net/blog/arungupta/archive/2014/05/30/wildfly-managed-domain-raspberry-pi-tech-tip-27
<img src="http://www.java.net/sites/default/files/arun-photo3_0.png" border="0" align="left" /><p><a href="http://blog.arungupta.me/2014/05/wildfly-on-raspberry-pi-techtip-24/">Tech Tip #25</a> showed how to configure WildFly on Raspberry Pi. This tech tip will show how to set up a WildFly managed domain over two hosts running on Raspberry Pi.</p>
<p>Let's understand some basic concepts first.</p>
<p>WildFly can run in two modes:</p>
<ul>
<li><strong>Managed Domain</strong> allows you to run and manage a multi-server domain topology</li>
<li><strong>Standalone</strong> allows you to run a single server instance</li>
</ul>
<p>Multiple standalone instances can be configured to form a highly available cluster. It is up to the user to coordinate management across multiple servers though.</p>
<p>Servers running in managed domain mode are referred to as the members of a "domain". A single <strong>Domain Controller</strong> acts as the central management control point for the domain. A domain can span multiple physical or virtual hosts, with all WildFly instances on a given host under the control of a <strong>Host Controller</strong> process. One Host Controller instance is configured to act as the Domain Controller. The Host Controller on each host interacts with the Domain Controller to control the lifecycle of the application server instances running on its host and to assist the Domain Controller in managing them.</p>
<p>It is important to understand that administration of servers (standalone or managed domain) is orthogonal to clustering and high availability. In managed domain mode, a server instance always belongs to a <strong>Server Group</strong>. A domain can have multiple Server Groups. Each group can be configured with different profiles and deployments.</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/wildfly-clustering-architecture-techtip27.png"><img class="alignnone size-full wp-image-11578" alt="wildfly-clustering-architecture-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/wildfly-clustering-architecture-techtip27.png" width="498" height="353" /></a></p>
<p>For example, default WildFly installation comes with "main-server-group" and "other-server-group" as shown:<br />
<pre class="prettyprint"><code>&lt;servers&gt;<br /> &lt;server name="server-one" group="main-server-group" auto-start="false"/&gt;<br /> &lt;server name="server-two" group="main-server-group" auto-start="false"&gt;<br /> &lt;socket-bindings port-offset="150"/&gt;<br /> &lt;/server&gt;<br /> &lt;server name="server-three" group="other-server-group" auto-start="true"&gt;<br /> &lt;socket-bindings port-offset="250"/&gt;<br /> &lt;/server&gt;<br />&lt;/servers&gt;</code></pre><br />
"main-server-group" has two servers: "server-one" and "server-two". "other-server-group" has one server: "server-three".</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/default-server-group-techtip27.png"><img class="alignnone size-full wp-image-11571" alt="default-server-group-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/default-server-group-techtip27.png" width="450" height="192" /></a></p>
<p>By default, these are all configured on a single machine (localhost) with Host Controller and Domain Controller co-located. This is defined in "domain/configuration/host.xml".</p>
<p>Each Server Group is configured with a profile. These profiles are defined, and associated with a Server Group, in "domain/configuration/domain.xml". Default WildFly installation comes with two profiles: "full" and "full-ha".<br />
<pre class="prettyprint"><code>&lt;server-groups&gt;<br /> &lt;server-group name="main-server-group" profile="full"&gt;<br /> . . .<br /> &lt;/server-group&gt;<br /> &lt;server-group name="other-server-group" profile="full-ha"&gt;<br /> . . .<br /> &lt;/server-group&gt;<br />&lt;/server-groups&gt;</code></pre><br />
"main-server-group" is configured with "full" profile and "other-server-group" is configured with "full-ha" profile.</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/server-group-profiles-techtip27.png"><img class="alignnone size-full wp-image-11572" alt="server-group-profiles-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/server-group-profiles-techtip27.png" width="459" height="421" /></a></p>
<p>A profile is a named set of subsystem configurations that adds capabilities like Servlet, EJB, JPA, JTA, etc. The "full-ha" profile also enables all the subsystems needed to establish a cluster (infinispan, jgroups, and mod_cluster).</p>
<p>OK, enough explanation, let's get into action!</p>
<p>Make sure to install WildFly on each of the Raspberry Pi following <a href="http://blog.arungupta.me/2014/05/wildfly-on-raspberry-pi-techtip-24/">Tech Tip #25</a>.</p>
<p><a href="https://docs.jboss.org/author/display/WFLY8/WildFly+8+Cluster+Howto">docs.jboss.org/author/display/WFLY8/WildFly+8+Cluster+Howto</a> explain in detail on how to setup Domain Controller and Host Controller on two hosts and enable clustering. This tech tip follows these instructions and adapt them for WildFly. This tech tip will show how to configure WildFly managed domain over two Raspberry Pis.</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-cluster-techtip27.png"><img alt="raspi-cluster-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-cluster-techtip27.png" width="535" height="259" /></a></p>
<p>A subsequent blog will show how to configure cluster over this managed domain.</p>
<p>Here is what we'll do:</p>
<ul>
<li>Call one host "master" and the other "slave".</li>
<li>Both will run WildFly 8.1 CR2; master will run as the Domain Controller and slave will run under the domain management of master.</li>
<li>Deploy a project into the domain, and verify that the application is deployed on both master and slave hosts.</li>
</ul>
<p>Each Raspberry Pi was connected to a display to obtain its IP address and enable SSH access. SSH can be enabled by invoking<br />
<code class="prettyprint">sudo raspi-config</code><br />
Scroll down to the SSH option and enable it. Then the two Raspberry Pis were configured to run in headless mode (no keyboard, mouse, or monitor).</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-wildfly-setup-techtip27.png"><img class="alignnone size-large wp-image-11585" alt="raspi-wildfly-setup-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/raspi-wildfly-setup-techtip27-1024x678.png" width="640" height="423" /></a></p>
<ol>
<li>Domain Configuration - Configure master and slave "host.xml"
<ol>
<li>Master configuration
<ol>
<li>Login to the master Raspberry pi:<br />
<pre class="prettyprint"><code>~&gt; <strong>ssh 10.0.0.27 -l pi</strong><br />pi@10.0.0.27's password: <strong>raspberry</strong><br /><br />Linux raspberrypi 3.10.25+ #622 PREEMPT Fri Jan 3 18:41:00 GMT 2014 armv6l<br /><br />The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.<br /><br />Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.<br />Last login: Wed May 28 19:03:19 2014<br />pi@raspberrypi ~ $</code></pre><br />
The default password is "raspberry".</li>
<li>By default, WildFly is configured to run in the server VM. As mentioned in <a href="http://blog.arungupta.me/2014/05/wildfly-on-raspberry-pi-techtip-24/">Tech Tip #25</a>, this is not supported by the JDK bundled with Raspbian. This needs to be updated in two places. Edit "bin/domain.sh":<br />
<pre class="prettyprint"><code>80 # If -server not set in JAVA_OPTS, set it, if supported<br />81 SERVER_SET=`echo $JAVA_OPTS | $GREP "\-server"`<br />82 if [ "x$SERVER_SET" = "x" ]; then<br />83<br />84 # Check for SUN(tm) JVM w/ HotSpot support<br />85 if [ "x$HAS_HOTSPOT" = "x" ]; then<br />86 HAS_HOTSPOT=`"$JAVA" $JVM_OPTVERSION -version 2&gt;&amp;1 | $GREP -i HotSpot`<br />87 fi<br />88<br />89 # Check for OpenJDK JVM w/server support<br />90 if [ "x$HAS_OPENJDK" = "x" ]; then<br />91 HAS_OPENJDK=`"$JAVA" $JVM_OPTVERSION 2&gt;&amp;1 | $GREP -i OpenJDK`<br />92 fi<br />93<br />94 # Check for IBM JVM w/server support<br />95 if [ "x$HAS_IBM" = "x" ]; then<br />96 HAS_IBM=`"$JAVA" $JVM_OPTVERSION 2&gt;&amp;1 | $GREP -i "IBM J9"`<br />97 fi<br />98<br />99 # Enable -server if we have Hotspot or OpenJDK, unless we can't<br />100 if [ "x$HAS_HOTSPOT" != "x" -o "x$HAS_OPENJDK" != "x" -o "x$HAS_IBM" != "x" ]; then<br />101 # MacOS does not support -server flag<br />102 if [ "$darwin" != "true" ]; then<br />103 PROCESS_CONTROLLER_JAVA_OPTS="-server $PROCESS_CONTROLLER_JAVA_OPTS"<br />104 HOST_CONTROLLER_JAVA_OPTS="-server $HOST_CONTROLLER_JAVA_OPTS"<br />105 JVM_OPTVERSION="-server $JVM_OPTVERSION"<br />106 fi<br />107 fi<br />108 else<br />109 JVM_OPTVERSION="-server $JVM_OPTVERSION"<br />110 fi</code></pre><br />
and remove the lines marked 80 through 110.</li>
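<p>If you prefer not to edit the file by hand, the same block can be removed with sed. This is a sketch: verify that the line range in your copy of "bin/domain.sh" matches the listing above before running it.</p>

```shell
# Delete the -server auto-detection block (lines 80 through 110 above)
# from bin/domain.sh; the original is kept as bin/domain.sh.bak.
sed -i.bak '80,110d' bin/domain.sh
```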
<li>Edit host.xml<br />
<code class="prettyprint">vi domain/configuration/host.xml</code><br />
Remove the "-server" option by deleting lines 75 through 77, highlighted in bold below:<br />
<pre class="prettyprint"><code>71 &lt;jvms&gt;<br />72 &lt;jvm name="default"&gt;<br />73 &lt;heap size="64m" max-size="256m"/&gt;<br />74 &lt;permgen size="256m" max-size="256m"/&gt;<br /><strong>75 &lt;jvm-options&gt;</strong><br /><strong>76 &lt;option value="-server"/&gt;</strong><br /><strong>77 &lt;/jvm-options&gt;</strong><br />78 &lt;/jvm&gt;<br />79 &lt;/jvms&gt;</code></pre><br />
</li>
<li>The default setting for &lt;interfaces&gt; in this file looks like:<br />
<pre class="prettyprint"><code>&lt;interfaces&gt;<br /> &lt;interface name="management"&gt;<br /> &lt;inet-address value="${jboss.bind.address.management:127.0.0.1}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="public"&gt;<br /> &lt;inet-address value="${jboss.bind.address:127.0.0.1}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="unsecure"&gt;<br /> &lt;inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/&gt;<br /> &lt;/interface&gt;<br />&lt;/interfaces&gt;</code></pre><br />
Change this to:<br />
<pre class="prettyprint"><code>&lt;interfaces&gt;<br /> &lt;interface name="management"&gt;<br /> &lt;inet-address value="${jboss.bind.address.management:<strong>10.0.0.27</strong>}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="public"&gt;<br /> &lt;inet-address value="${jboss.bind.address:<strong>10.0.0.27</strong>}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="unsecure"&gt;<br /> &lt;inet-address value="${jboss.bind.address.unsecure:<strong>10.0.0.27</strong>}"/&gt;<br /> &lt;/interface&gt;<br />&lt;/interfaces&gt;</code></pre><br />
10.0.0.27 is master's IP address.</li>
</ol>
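<p>Alternatively, the three 127.0.0.1 defaults can be switched in one pass with sed. This is a sketch; run it from the WildFly install directory and substitute your master's actual IP address for 10.0.0.27.</p>

```shell
# Point all three interface defaults in host.xml at the master's address;
# the original file is kept as host.xml.bak.
sed -i.bak 's/:127\.0\.0\.1}/:10.0.0.27}/g' domain/configuration/host.xml
```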
</li>
<li>Slave configuration
<ol>
<li>Login to the slave Raspberry pi:<br />
<pre class="prettyprint"><code>~&gt; <strong>ssh 10.0.0.28 -l pi</strong><br />pi@10.0.0.28's password: <strong>raspberry</strong><br /><br />Linux raspberrypi 3.10.25+ #622 PREEMPT Fri Jan 3 18:41:00 GMT 2014 armv6l<br /><br />The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.<br /><br />Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.<br />Last login: Tue May 27 21:28:02 2014 from 10.0.0.8<br />pi@raspberrypi ~ $</code></pre><br />
The default password is "raspberry".</li>
<li>Edit "bin/domain.sh" and remove the same lines as done for the master. This ensures that the non-server (client) JVM is used for running the domain.</li>
<li>Edit "host.xml"<br />
<code class="prettyprint">vi domain/configuration/host.xml</code>
</li>
<li>Set the host name by changing<br />
<code class="prettyprint">&lt;host name="master" xmlns="urn:jboss:domain:2.1"&gt;</code><br />
to<br />
<code class="prettyprint">&lt;host name="<strong>slave</strong>" xmlns="urn:jboss:domain:2.1"&gt;</code>
</li>
<li>Remove the "-server" option by removing lines 75 through 77, as explained above.</li>
<li>Modify &lt;domain-controller&gt; so that the slave can connect to the master's management port. Change<br />
<pre class="prettyprint"><code>&lt;domain-controller&gt;<br />&lt;local/&gt;<br />&lt;!-- Alternative remote domain controller configuration with a host and port --&gt;<br />&lt;!-- &lt;remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/&gt; --&gt;<br />&lt;/domain-controller&gt;</code></pre><br />
to<br />
<pre class="prettyprint"><code>&lt;domain-controller&gt;<br />&lt;local/&gt;<br />&lt;!-- Alternative remote domain controller configuration with a host and port --&gt;<br /><strong>&lt;remote host="10.0.0.27" port="9999" security-realm="ManagementRealm"/&gt;</strong><br />&lt;/domain-controller&gt;</code></pre><br />
10.0.0.27 is master's IP address.</li>
<li>Change the default &lt;interfaces&gt; from:<br />
<pre class="prettyprint"><code>&lt;interfaces&gt;<br /> &lt;interface name="management"&gt;<br /> &lt;inet-address value="${jboss.bind.address.management:127.0.0.1}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="public"&gt;<br /> &lt;inet-address value="${jboss.bind.address:127.0.0.1}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="unsecure"&gt;<br /> &lt;inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/&gt;<br /> &lt;/interface&gt;<br />&lt;/interfaces&gt;</code></pre><br />
to:<br />
<pre class="prettyprint"><code>&lt;interfaces&gt;<br /> &lt;interface name="management"&gt;<br /> &lt;inet-address value="${jboss.bind.address.management:<strong>10.0.0.28</strong>}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="public"&gt;<br /> &lt;inet-address value="${jboss.bind.address:<strong>10.0.0.28</strong>}"/&gt;<br /> &lt;/interface&gt;<br /> &lt;interface name="unsecure"&gt;<br /> &lt;inet-address value="${jboss.bind.address.unsecure:<strong>10.0.0.28</strong>}"/&gt;<br /> &lt;/interface&gt;<br />&lt;/interfaces&gt;</code></pre><br />
10.0.0.28 is slave's IP address.</li>
</ol>
</li>
</ol>
</li>
<li>Security Configuration
<ol>
<li>Master
<ol>
<li>Using "add-user.sh" script, create a user in Management Realm for master.<br />
<pre class="prettyprint"><code>pi@raspberrypi ~/wildfly-8.1.0.CR2 $ <strong>./bin/add-user.sh</strong><br /><br />What type of user do you wish to add?<br />a) Management User (mgmt-users.properties)<br />b) Application User (application-users.properties)<br />(a): <strong>&lt;ENTER&gt;</strong><br /><br />Enter the details of the new user to add.<br />Using realm 'ManagementRealm' as discovered from the existing property files.<br />Username : <strong>master<br /></strong>Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.<br />- The password should not be one of the following restricted values {root, admin, administrator}<br />- The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)<br />- The password should be different from the username<br />Password : <strong>master<br /></strong>JBAS015269: Password must have at least 8 characters!<br />Are you sure you want to use the password entered yes/no? <strong>yes<br /></strong>Re-enter Password : <strong>master<br /></strong>What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]:<br />About to add user 'master' for realm 'ManagementRealm'<br />Is this correct yes/no? <strong>yes<br /></strong>Added user 'master' to file '/home/pi/wildfly-8.1.0.CR2/standalone/configuration/mgmt-users.properties'<br />Added user 'master' to file '/home/pi/wildfly-8.1.0.CR2/domain/configuration/mgmt-users.properties'<br />Added user 'master' with groups to file '/home/pi/wildfly-8.1.0.CR2/standalone/configuration/mgmt-groups.properties'<br />Added user 'master' with groups to file '/home/pi/wildfly-8.1.0.CR2/domain/configuration/mgmt-groups.properties'<br />Is this new user going to be used for one AS process to connect to another AS process? e.g. 
for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls. yes/no? <strong>no</strong></code></pre>
</li>
<li>Using the "add-user.sh" script, create a user in the ManagementRealm for slave. The username must match the name given in the slave's &lt;host&gt; element in "host.xml".<br />
<pre class="prettyprint"><code>pi@raspberrypi ~/wildfly-8.1.0.CR2 $ <strong>./bin/add-user.sh</strong><br /><br />What type of user do you wish to add?<br />a) Management User (mgmt-users.properties)<br />b) Application User (application-users.properties)<br />(a): <strong>&lt;ENTER&gt;</strong><br /><br />Enter the details of the new user to add.<br />Using realm 'ManagementRealm' as discovered from the existing property files.<br />Username : <strong>slave<br /></strong>Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.<br />- The password should not be one of the following restricted values {root, admin, administrator}<br />- The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)<br />- The password should be different from the username<br />Password : <strong>slave<br /></strong>JBAS015269: Password must have at least 8 characters!<br />Are you sure you want to use the password entered yes/no? <strong>yes<br /></strong>Re-enter Password : <strong>slave<br /></strong>What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]:<br />About to add user 'slave' for realm 'ManagementRealm'<br />Is this correct yes/no? <strong>yes<br /></strong>Added user 'slave' to file '/home/pi/wildfly-8.1.0.CR2/standalone/configuration/mgmt-users.properties'<br />Added user 'slave' to file '/home/pi/wildfly-8.1.0.CR2/domain/configuration/mgmt-users.properties'<br />Added user 'slave' with groups to file '/home/pi/wildfly-8.1.0.CR2/standalone/configuration/mgmt-groups.properties'<br />Added user 'slave' with groups to file '/home/pi/wildfly-8.1.0.CR2/domain/configuration/mgmt-groups.properties'<br />Is this new user going to be used for one AS process to connect to another AS process? e.g. 
for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls. yes/no? <strong>yes<br /></strong>To represent the user add the following to the server-identities definition &lt;secret value="c2xhdmU=" /&gt;</code></pre><br />
The answer to the last question is "yes", as this slave will be connecting to the master.<br />
Note that the last line of the output contains a &lt;secret&gt; element. Copy this string, as it will be needed in the slave's "host.xml".</li>
</ol>
</li>
<li>Slave
<ol>
<li>Configure "domain/configuration/host.xml" for authentication by changing the security-realms element from:<br />
<pre class="prettyprint"><code>&lt;management&gt;<br /> &lt;security-realms&gt;<br /> &lt;security-realm name="ManagementRealm"&gt;<br /> &lt;authentication&gt;<br /> &lt;local default-user="$local" skip-group-loading="true" /&gt;<br /> &lt;properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/&gt;<br /> &lt;/authentication&gt;</code></pre><br />
to<br />
<pre class="prettyprint"><code>&lt;management&gt;<br /> &lt;security-realms&gt;<br /> &lt;security-realm name="ManagementRealm"&gt;<br /><strong> &lt;server-identities&gt;</strong><br /><strong> &lt;secret value="c2xhdmU=" /&gt;</strong><br /><strong> &lt;/server-identities&gt;</strong><br /> &lt;authentication&gt;<br /> &lt;local default-user="$local" skip-group-loading="true" /&gt;<br /> &lt;properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/&gt;<br /> &lt;/authentication&gt;</code></pre><br />
The &lt;secret&gt; element added here was obtained when "add-user.sh" was invoked for the slave in the previous step. The slave's host name is "slave", the master has a user named "slave", and this &lt;secret&gt; element (the Base64 encoding of that user's password) allows the slave to authenticate with the master.</li>
</ol>
</li>
</ol>
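<p>The secret value is simply the Base64 encoding of the slave user's password, so it can also be computed directly. Shown here for the password "slave" used above:</p>

```shell
# Base64-encode the slave user's password; the result is exactly the
# string that goes into the <secret value="..."/> element in host.xml.
printf '%s' slave | base64
# → c2xhdmU=
```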
</li>
<li>Change the default cluster password for HornetQ, otherwise it will log pesky errors such as:<br />
<pre class="prettyprint"><code>[Server:server-three] 19:56:54,746 ERROR [org.hornetq.core.server]<br />(Thread-1 (HornetQ-client-netty-threads-6928453)) HQ224058: Stopping<br />ClusterManager. As it failed to authenticate with the cluster:<br />HQ119099: Unable to authenticate cluster user:<br />HORNETQ.CLUSTER.ADMIN.USER<br />[Server:server-three] 19:56:54,982 INFO [org.hornetq.core.server]<br />(Thread-6 (HornetQ-server-HornetQServerImpl::serverUUID=2b53cedf-e6a2-11e3-9d57-71d44db5d6ff-27664265))<br />HQ221029: stopped bridge<br />sf.my-cluster.5e79c85f-e69d-11e3-a06c-37b4b1e261b8<br />[Server:server-three] 19:56:55,833 ERROR [org.hornetq.core.server]<br />(default I/O-1) HQ224018: Failed to create session:<br />HornetQClusterSecurityException[errorType=CLUSTER_SECURITY_EXCEPTION<br />message=HQ119099: Unable to authenticate cluster user:<br />HORNETQ.CLUSTER.ADMIN.USER]</code></pre><br />
Edit "domain/configuration/domain.xml" in master and change:<br />
<pre class="prettyprint"><code>&lt;subsystem xmlns="urn:jboss:domain:messaging:2.0"&gt;<br />&lt;hornetq-server&gt;<br />&lt;cluster-password&gt;${jboss.messaging.cluster.password:CHANGE ME!!}&lt;/cluster-password&gt;</code></pre><br />
to<br />
<pre class="prettyprint"><code>&lt;subsystem xmlns="urn:jboss:domain:messaging:2.0"&gt;<br />&lt;hornetq-server&gt;<br />&lt;cluster-password&gt;<strong>newClusterPassword</strong>&lt;/cluster-password&gt;</code></pre><br />
Make this change in "domain/configuration/domain.xml" for slave as well.</li>
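<p>Rather than inventing a cluster password, a random one can be generated (a sketch; openssl is typically available on Raspbian). Paste the same value into both domain.xml files:</p>

```shell
# Generate a random Base64 password for the HornetQ cluster;
# use the identical value on master and slave.
openssl rand -base64 12
```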
<li>Disable the firewall on master and slave by running the following command on each host.<br />
<code class="prettyprint">sudo iptables --flush</code>
</li>
<li>Start the master first as:<br />
<code class="prettyprint">./wildfly-8.1.0.CR2/bin/domain.sh</code><br />
The last message in the log will look like:<br />
<code class="prettyprint">[Server:server-three] 17:32:01,641 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.CR2 "Kenny" started in 137565ms - Started 321 of 430 services (186 services are lazy, passive or on-demand)</code><br />
After the master has completely started, start the slave as:<br />
<code class="prettyprint">./wildfly-8.1.0.CR2/bin/domain.sh</code><br />
The last message in the log will look like:<br />
<code class="prettyprint">[Server:server-three] 17:35:43,947 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.CR2 "Kenny" started in 127576ms - Started 322 of 431 services (187 services are lazy, passive or on-demand)</code><br />
The server log for master will also show a message indicating that the slave is now a registered host with master:<br />
<code class="prettyprint">[Host Controller] 17:33:31,545 INFO [org.jboss.as.domain] (Host Controller Service Threads - 56) JBAS010918: Registered remote slave host "slave", WildFly 8.1.0.CR2 "Kenny"</code>
</li>
<li>Deploy the application
<ol>
<li>Check out a simple application that puts/gets some HTTP session data.<br />
<code class="prettyprint">git clone https://github.com/arun-gupta/wildfly-samples.git</code><br />
This workspace has other samples related to WildFly, but the one we care about is in the "clustering/http" directory. Change to that directory and build the sample:<br />
<pre class="prettyprint"><code>cd clustering/http<br />mvn package</code></pre><br />
This will generate a WAR file in the "target" directory.</li>
<li>"jboss-cli" is a command-line management tool for standalone servers and managed domains. It is bundled as a script in the "bin" directory and can connect to remote servers as well. Now that we have the Domain Controller running on master and the Host Controller running on slave, let's connect to it from the local machine:<br />
<pre class="prettyprint"><code><strong>~/tools/wildfly-8.1.0.CR2/bin/jboss-cli.sh -c --controller=10.0.0.27:9990</strong><br />Authenticating against security realm: ManagementRealm<br />Username: <strong>master</strong><br />Password: <strong>master</strong><br />[domain@10.0.0.27:9990 /]</code></pre><br />
10.0.0.27 is the IP address of the master. The generated WAR file can now be deployed as:<br />
<code class="prettyprint">[domain@10.0.0.27:9990 /] deploy http-1.0-SNAPSHOT.war --server-groups=other-server-group</code><br />
This shows the following output in master's log:<br />
<pre class="prettyprint"><code>[Host Controller] 20:20:27,058 INFO [org.jboss.as.repository] (management-handler-thread - 6) JBAS014900: Content added at location /home/pi/wildfly-8.1.0.CR2/domain/data/content/c4/f73f651f03c68d3a7b29686519e6142bccdfa4/content<br />[Server:server-three] 20:20:31,497 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "http-1.0-SNAPSHOT.war" (runtime-name: "http-1.0-SNAPSHOT.war")<br /><br />. . .<br /><br />[Server:server-three] 20:20:38,351 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 71) JBAS010281: Started dist cache from web container<br />[Server:server-three] 20:20:38,361 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 70) JBAS010281: Started default-host/http-1.0-SNAPSHOT cache from web container<br />[Server:server-three] 20:20:38,599 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) JBAS017534: Registered web context: /http-1.0-SNAPSHOT<br />[Server:server-three] 20:20:42,474 INFO [org.jboss.as.server] (ServerService Thread Pool -- 68) JBAS018559: Deployed "http-1.0-SNAPSHOT.war" (runtime-name : "http-1.0-SNAPSHOT.war")</code></pre><br />
And slave's log shows:<br />
<pre class="prettyprint"><code>[Server:server-three] 20:20:31,475 INFO [org.jboss.as.server.deployment] (MSC service thread 1-2) JBAS015876: Starting deployment of "http-1.0-SNAPSHOT.war" (runtime-name: "http-1.0-SNAPSHOT.war")<br />[Server:server-three] 20:20:35,690 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 70) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.<br /><br />. . .<br /><br />[Server:server-three] 20:20:41,028 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 71) JBAS010281: Started default-host/http-1.0-SNAPSHOT cache from web container<br />[Server:server-three] 20:20:41,214 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) JBAS017534: Registered web context: /http-1.0-SNAPSHOT<br />[Server:server-three] 20:20:42,472 INFO [org.jboss.as.server] (ServerService Thread Pool -- 69) JBAS018559: Deployed "http-1.0-SNAPSHOT.war" (runtime-name : "http-1.0-SNAPSHOT.war"</code></pre>
</li>
</ol>
</li>
</ol>
<p>Accessing the application on master (<a href="http://10.0.0.27:8330/http-1.0-SNAPSHOT/index.jsp" title="http://10.0.0.27:8330/http-1.0-SNAPSHOT/index.jsp">http://10.0.0.27:8330/http-1.0-SNAPSHOT/index.jsp</a>) shows:</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/master-default-output-techtip27.png"><img class="alignnone wp-image-11581" alt="master-default-output-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/master-default-output-techtip27.png" width="366" height="301" /></a></p>
<p>Accessing the application on slave (<a href="http://10.0.0.28:8330/http-1.0-SNAPSHOT/index.jsp" title="http://10.0.0.28:8330/http-1.0-SNAPSHOT/index.jsp">http://10.0.0.28:8330/http-1.0-SNAPSHOT/index.jsp</a>) shows:</p>
<p><a href="http://blog.arungupta.me/wp-content/uploads/2014/05/slave-default-output-techtip27.png"><img class="alignnone wp-image-11582" alt="slave-default-output-techtip27" src="http://blog.arungupta.me/wp-content/uploads/2014/05/slave-default-output-techtip27.png" width="370" height="314" /></a></p>
<p>So we could easily deploy a Java EE application to multiple WildFly instances, running on Raspberry Pi in managed domain mode, with a single command. How cool is that?</p>
<p>The next blog will explain how to set up a cluster across these instances.</p>
http://www.java.net/blog/arungupta/archive/2014/05/30/wildfly-managed-domain-raspberry-pi-tech-tip-27#commentsBloggingBlogsCommunityIoTJ2EEJava EnterpriseFri, 30 May 2014 17:17:50 +0000arungupta903435 at http://www.java.netJBoss User Group Worldwide: Learn JBoss Technologies using G+ Hangouthttp://www.java.net/blog/arungupta/archive/2014/05/28/jboss-user-group-worldwide-learn-jboss-technologies-using-g-hangout
<img src="http://www.java.net/sites/default/files/arun-photo3_0.png" border="0" align="left" /><p><a href="http://www.meetup.com/JBoss-User-Group-Worldwide/"><img class="alignnone size-large wp-image-11552" alt="jbug-worldwide-logo" src="http://blog.arungupta.me/wp-content/uploads/2014/05/jbug-worldwide-logo-1024x158.png" width="640" height="98" /></a></p>
<p>A JBoss User Group (JBUG) is a group of people who share a common interest in JBoss technologies. They are organized and supported by the community and meet on a regular basis to discuss new technologies, development methodologies, interesting use cases, and other technical topics. The common goal is to provide education, help, and social events for the community and to promote open source. There are several <a href="http://www.jboss.org/usergroups">JBUGs around the world</a> and you can always start a new one in your local community.</p>
<p><a href="http://www.meetup.com/JBoss-User-Group-Worldwide/">JBUG Worldwide</a> is like any other JBUG, except that its events mostly take place virtually, using a Google Hangout. This allows us to reach a broader audience, and the presentations are recorded and available for replay on the <a href="https://www.youtube.com/channel/UCrAKwbOeiDKtTShxLCKFvNg">YouTube channel</a>. It gives the audience access to world-class speakers from Red Hat and the rest of the JBoss community who may otherwise not be easily accessible. This effort was initiated by <a href="https://community.jboss.org/groups/JBUG-newcastle">JBUG Newcastle</a>, so a session may sometimes coincide with a physical meeting at that JBUG.</p>
<p>You may notice a stark similarity with <a href="http://www.meetup.com/virtualJUG/">vJUG</a>, and that is indeed the case! However, vJUG will continue to focus on broader Java topics and may feature some widely used JBoss technologies, whereas JBUG Worldwide will cover a wider range of JBoss technologies, including some of the niche ones.</p>
<p>The first session was on <a href="http://www.meetup.com/JBoss-User-Group-Worldwide/events/179386632/">LiveOak: Is that a mobile backend as a service in your pocket ?</a> and the recording is available:</p>
<p><iframe src="//www.youtube.com/embed/NO-2S5p_1mk" height="360" width="640" allowfullscreen="" frameborder="0"></iframe></p>
<p>Several other sessions are already lined up for this year, and more are in the pipeline:</p>
<ul>
<li><a href="http://www.meetup.com/JBoss-User-Group-Worldwide/events/180463202/">Code-driven introduction to Java EE 7</a> (Jun 17)</li>
<li><a href="http://www.meetup.com/JBoss-User-Group-Worldwide/events/180464912/">What's new in WildFly 8 ?</a> (Jul 8)</li>
<li><a href="http://www.meetup.com/JBoss-User-Group-Worldwide/events/183724422/">Testing the Enterprise layers, with Arquillian</a> (Oct 21)</li>
<li><a href="http://www.meetup.com/JBoss-User-Group-Worldwide/events/183733082/">Case Studies in Testable Java EE Development</a> (Nov 18)</li>
</ul>
<p>We would love to hear your feedback about speakers/topics, streamlining the process, or anything else. We need you to make this successful!</p>
<p>Ping <a href="http://twitter.com/Pfrobinson">@Pfrobinson</a> or myself (<a href="http://twitter.com/arungupta">@arungupta</a>) for any questions/comments.</p>
http://www.java.net/blog/arungupta/archive/2014/05/28/jboss-user-group-worldwide-learn-jboss-technologies-using-g-hangout#commentsBloggingCommunityJ2EEJava User GroupsWed, 28 May 2014 21:25:33 +0000arungupta903368 at http://www.java.net