Canonical Voices

What Robert Ayres talks about

In my previous post, we added Memcached to our cluster. In this post, I’ll write a bit more about the Tomcat configuration options that are available including JMX monitoring. I’ll also show how easy it is to enable session clustering.

Configuration and Monitoring

All charms come with many options available for configuration. Each is selected to allow the same tuning you would typically perform on a manually deployed machine. Configuration options are shown per charm when browsing the Charm Store (jujucharms.com/charms/precise). The Tomcat charm provides numerous options. For example, to tweak the JVM options of a running service:

juju set tomcat "java_opts=-Xms768M -Xmx1024M"

This sets the Java heap minimum and maximum to 768 MB and 1024 MB respectively. If you are debugging an application, you may also add ‘-XX:+HeapDumpOnOutOfMemoryError’ to ‘java_opts’ to create a ‘.hprof’ Java heap dump, which you can inspect with VisualVM or jhat, each time an OutOfMemoryError occurs.

To open a remote debugger:

juju set tomcat debug_enabled=True

This will open a JDWP debugger on port 8000 that you can use to step through code from Eclipse, NetBeans etc. (Note: the debugger is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 8000:localhost:8000 ubuntu@xxx.compute.amazonaws.com’ – then connect your IDE to localhost port 8000.)

To enable JMX monitoring, set the charm’s JMX options (the exact option names are listed in the Tomcat charm README). This will start a remote JMX listener on ports 10001 and 10002 and set passwords for the ‘monitorRole’ and ‘controlRole’ users (not setting a password disables that account). You can now open VisualVM or JConsole and connect to the remote JMX instance. (Note: JMX is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 10001:localhost:10001 -L 10002:localhost:10002 ubuntu@xxx.compute.amazonaws.com’ – then connect your JMX client to port 10001.) You can easily expose your own application-specific MBeans for monitoring by adding them to the platform MBeanServer.
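As a sketch of that last point (the class and object names here are hypothetical, not from the original post), registering your own standard MBean on the platform MBeanServer makes it visible to JConsole or VisualVM alongside Tomcat’s built-in MBeans:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {

    // Standard MBean pattern: JMX looks for an interface named after the
    // implementation class plus "MBean" (here the nested JmxDemo$CounterMBean).
    public interface CounterMBean {
        long getCount();   // exposed as the read-only attribute "Count"
        void reset();      // exposed as an operation
    }

    public static class Counter implements CounterMBean {
        private long count;
        public synchronized long getCount() { return count; }
        public synchronized void increment() { count++; }
        public synchronized void reset() { count = 0; }
    }

    public static void main(String[] args) throws Exception {
        // Register on the platform MBeanServer, the same server the remote
        // JMX listener exposes to monitoring clients.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Counter counter = new Counter();
        server.registerMBean(counter, new ObjectName("com.example:type=Counter"));

        counter.increment();
        // Read the attribute back through JMX, as a monitoring client would
        System.out.println(server.getAttribute(
                new ObjectName("com.example:type=Counter"), "Count"));
    }
}
```

With the charm’s JMX listener enabled, the same bean would be browsable over the ssh tunnel described above.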

Options are applied to services and to all units under a service. It isn’t possible to apply options to a specific unit. So if you enable debugging, you enable it for all Tomcat units. Same with Java options.

Options may also be applied at deployment time. For example, to use Tomcat 6 (rather than the default Tomcat 7), create a ‘config.yaml’ file containing the following:

tomcat:
  tomcat_version: tomcat6

Then deploy:

juju deploy --config config.yaml cs:~robert-ayres/precise/tomcat

All units added via ‘add-unit’ will also be Tomcat 6.

Session Clustering

Previously, we set up a Juju cluster consisting of two Tomcat units behind HAProxy. In this configuration, HTTP sessions exist only on individual Tomcat units. For many production setups, load balancer sticky sessions with non-replicated sessions are the most performant choice, suitable where HTTP sessions are either not required or are expendable in the event of unit failure. For setups concerned about session availability, you can enable Tomcat session clustering on your Juju service, which replicates session data between all units in the service. Should a unit fail, any of the remaining units can pick up subsequent requests with the previous session state. To enable session clustering:

juju set tomcat cluster_enabled=True

We have two choices of how the cluster manages membership. The preferred choice is multicast traffic, but as EC2 doesn’t allow this, we must use static configuration. This is the default, but you can switch between the two methods by changing the value of the ‘multicast’ option. As with everything else deployed by Juju, any units added or removed via ‘add-unit’ or ‘remove-unit’ are automatically included in or excluded from the cluster membership. Being able to toggle clustering this easily also lets you benchmark precisely what latency/throughput cost replicated sessions incur.

In summary, I’ve shown how you can tweak Tomcat configuration including enabling JMX monitoring. We’ve also seen how to enable session clustering. In my final post of the series, I shall show how you can add Solr indexing to your application.

As with the datasource, we map the relation to a JNDI name via charm configuration (see the Tomcat charm README for the option); this will map the ‘memcached’ service under the JNDI name ‘param/Memcached’. Whilst Memcached is deploying, you can add the relation ahead of time:

juju add-relation tomcat memcached

We will use the excellent Java Memcached library Spy Memcached (code.google.com/p/spymemcached/) in our application. Download the ‘spymemcached-x.x.x.jar’ and copy it to ‘juju-example/lib’.
Now edit ‘juju-example/grails-app/conf/spring/resources.groovy’ so it contains the following:
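The original bean definitions are not reproduced in this copy; a minimal sketch, assuming the unit list is published under the JNDI environment entry ‘param/Memcached’ as configured above, might look like this:

```groovy
// juju-example/grails-app/conf/spring/resources.groovy
import javax.naming.InitialContext
import net.spy.memcached.AddrUtil
import net.spy.memcached.MemcachedClient

beans = {
    // Juju publishes a space separated list of Memcached units ("host:port ...")
    // as a JNDI environment parameter; parse it and build a Spy Memcached client.
    memcachedClient(MemcachedClient,
            AddrUtil.getAddresses(
                    new InitialContext().lookup('java:comp/env/param/Memcached') as String))
}
```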

Once redeployed, you should be able to open http://xxx.compute.amazonaws.com/juju/memcachedCount and refresh the page to see an incrementing counter, stored in Memcached.

As with our datasource connection, we utilise a JNDI lookup to instantiate our Memcached client using runtime configuration provided by Juju (a space separated list of Memcached units, provided as a JNDI environment parameter). With this structure, the developer has total control over integrating external services into their application. If they want to use a different Memcached library, they can use the Juju configuration to instantiate a different class.

If we want to increase our cache capacity, we can add more units:

juju add-unit -n 2 memcached

This will deploy another two Memcached units. Our Tomcats will update to reflect the new units and restart. (Note: as you add Memcached units, our example counter may appear to reset, as its Memcached key is hashed to another server.)
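The apparent reset happens because the client distributes keys across the available servers by hashing; when the server list changes, some keys map to a different unit. A toy sketch of the effect (simple modulo hashing for illustration only, not Spy Memcached’s actual algorithm):

```java
import java.util.ArrayList;
import java.util.List;

public class RehashDemo {
    // Pick a server index by hashing the key over the server count.
    static int serverFor(String key, int serverCount) {
        return Math.floorMod(key.hashCode(), serverCount);
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 100; i++) keys.add("key" + i);

        // Count keys whose server assignment changes when we grow from 2 to 4 units;
        // those keys' cached values are effectively lost after the resize.
        long moved = keys.stream()
                .filter(k -> serverFor(k, 2) != serverFor(k, 4))
                .count();
        System.out.println(moved > 0);
    }
}
```

Real clients mitigate (but do not eliminate) this with consistent hashing, which is why only some keys, like our counter, appear to vanish.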

We’ve added Memcached to our Juju cluster and seen how you can integrate external services within your application using JNDI values.
In my next post, I’ll write about how we can enable features of our existing cluster like JMX and utilise Tomcat session clustering.

In my previous post I gave an introduction to Juju, the new deployment tool in Ubuntu 12.04 Precise Pangolin. This post is the first of four demonstrating how you can deploy a typical Java web application into your own Juju cluster. I’ll start the series by deploying an initial cluster of HAProxy, Tomcat and MySQL to Amazon EC2, shown in the diagram below. You can always deploy to a different environment than EC2 such as MAAS or locally using LXC. The Juju commands are equivalent.

For this demo I’ll build a sample application using the excellent Grails framework (grails.org). You can of course use traditional tools such as Maven or Ant to produce your final WAR file. If you want to try the demo yourself, you’ll need to install Grails and Bazaar.

First bootstrap your environment and deploy the Tomcat charm:

juju bootstrap
juju deploy cs:~robert-ayres/precise/tomcat

This will deploy a Tomcat unit under the service name ‘tomcat’. Like the bootstrap instance, it will take a short time to launch a new instance, install Tomcat, configure defaults and start. You can check the progress with ‘juju status’; once deployed, the status output will list the ‘tomcat’ service and its first unit.

Should you wish to investigate the details of any unit you can ssh in – ‘ssh ubuntu@xxx.compute.amazonaws.com’ (Juju will have transferred your public key).

The Tomcat manager applications are installed and secured by default, requiring an admin password to be set. We can apply configuration to Juju services using ‘juju set <service> “<key>=<value>” …’. To set the ‘admin’ user password on our Tomcat unit:

juju set tomcat "admin_password=<password>"

Our Tomcat unit isn’t initially exposed to the Internet; we can only access it over an ssh tunnel (see ssh’s ‘-L’ option). To expose our Tomcat unit to the Internet:

juju expose tomcat

Now you should be able to open your web browser at http://xxx.compute.amazonaws.com:8080/manager and log in to Tomcat’s manager using the credentials we just set.
If we prefer our unit to run on a more traditional web port:

juju set tomcat http_port=80

After a short wait for reconfiguration, you should be able to access http://xxx.compute.amazonaws.com/manager with the same credentials.
Over HTTP, our credentials aren’t transmitted securely, so let’s enable HTTPS by setting the charm’s HTTPS option (the exact option name is in the Tomcat charm README).

Our Tomcat unit will listen for HTTPS connections on the traditional 443 port using a generated self-signed certificate (to use CA signed certificates, see the Tomcat charm README). Now we can securely access our manager application at https://xxx.computer.amazonaws.com/manager (you need to ignore any browser warning about a self-signed certificate). We now have a deployed Tomcat optimised and secured for production use!

Now let’s turn our attention to evolving a simple Grails application to demonstrate further Juju abilities.

With a working Grails installation, create ‘juju-example’ application:

grails create-app juju-example

This will create your application in a directory ‘juju-example’. Inside is a shell of a Grails application, enough for demonstration purposes.

To suit the directory layout of our deployed Tomcat, we should adjust our application to store stacktrace logs in a designated, writable directory. Edit ‘juju-example/grails-app/conf/Config.groovy’ and inside the ‘log4j’ block add the following ‘appenders’ block:
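The original block is not preserved in this copy; a minimal sketch (the log file path is an assumption, adjust it to whichever writable directory your Tomcat unit provides):

```groovy
// Inside the log4j = { ... } closure of juju-example/grails-app/conf/Config.groovy
appenders {
    // Write stacktrace logging to a designated writable file instead of
    // the default location inside the webapp directory.
    rollingFile name: 'stacktrace',
                file: '/var/lib/tomcat7/logs/stacktrace.log',
                maxFileSize: 1024 * 1024
}
```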

Now package the application:

grails war

This will build a deployable WAR file ‘juju-example/target/juju-example-0.1.war’.

You could deploy WAR files directly using the secured Tomcat manager, but there is a better way – using the J2EE Deployer charm.

The J2EE Deployer charm is a subordinate charm that essentially provides a Juju-controlled wrapper around deploying your WAR file into a Juju cluster. This has the distinct advantage of allowing you to upgrade multiple units with a single command, as shown later. To use the J2EE Deployer, first download a copy of the wrapper for our example application using bzr.

This will create a local copy of the wrapper under a directory ‘precise/j2ee-deployer’. The ‘precise’ parent directory is necessary for Juju when using locally deployed charms.
Copy our WAR file to the ‘deploy’ directory within:

cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy/

Then deploy the wrapper as a local charm under the service name ‘juju-example’.

As with other charms, this will securely upload our application into S3 storage for use by any of our Juju services. Once the deploy command returns, our application should be available within the cluster under the service name ‘juju-example’. To deploy it to Tomcat, we relate the services:

juju add-relation juju-example tomcat

Next, deploy MySQL:

juju deploy mysql

Then configure the Tomcat charm’s JNDI database option (the option name is in the charm README). This is a colon separated value that maps the requested database ‘juju’ of the ‘mysql’ service under a JNDI name of ‘jdbc/JujuDB’. The set of values after the final colon sets DBCP connection pooling options; here we specify a dedicated pool of 20 connections.
Once our MySQL unit is deployed, we relate our Tomcat service:

juju add-relation tomcat mysql

During this process, our Tomcat unit will request the use of database ‘juju’. Our MySQL unit will create the database and return a set of generated credentials for Tomcat to use. Once complete, our pooled datasource connection is available to our Tomcat application under JNDI – ‘java:comp/env/jdbc/JujuDB’. To demonstrate its use within our application, first configure Grails to use JNDI for its datasource connection. Within ‘juju-example/grails-app/conf/DataSource.groovy’, inside the ‘production’/‘dataSource’ block, add ‘jndiName = "java:comp/env/jdbc/JujuDB"’ so it reads as follows:
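With that addition, the production datasource block would read roughly as follows (other settings elided; the JNDI name is the one Juju configured above):

```groovy
// juju-example/grails-app/conf/DataSource.groovy
environments {
    production {
        dataSource {
            // Use the pooled datasource Juju configured in Tomcat's JNDI tree
            jndiName = "java:comp/env/jdbc/JujuDB"
        }
    }
}
```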

To redeploy after changes, rebuild the WAR, copy it over the previous one in the ‘deploy’ directory and run ‘juju upgrade-charm’ against the ‘juju-example’ service. This will upload our revised application into S3 again and then deploy it to all related services, restarting them in the process.
With our newly deployed application utilising its local JNDI datasource, we can now open our web browser at http://xxx.compute.amazonaws.com/juju/book/list and use the generated page to perform CRUD operations on our Book objects, all persisted to our MySQL database.

A key point to be made is how you should develop your application to be cloud deployable. If the application is developed to utilise external resources via runtime lookups, the application may be deployed to any number of Juju clusters. You can observe this yourself by adding a relation between your application and any other Tomcat services.

For this post’s finale, let’s show how we can scale Tomcat.
First, deploy the HAProxy load balancer:

juju deploy haproxy

And associate with Tomcat:

juju add-relation haproxy tomcat

Unexpose Tomcat and expose HAProxy:

juju unexpose tomcat
juju expose haproxy

We can now use the public address of HAProxy to access our application.
Now we’re behind a load balancer, it’s simple to bolster our web traffic capacity by adding a further Tomcat unit:

juju add-unit tomcat

A second Tomcat unit will be deployed and configured identically to the first: same open ports, same MySQL connection, same web application. Once deployed, HAProxy will serve traffic to both instances in round-robin fashion. Any future application upgrades will occur on both Tomcat units. If we want to remove a unit:

juju remove-unit tomcat/<n>

where ‘<n>’ is the unit number (shown in status output).

That’s the end of the demo. Should you wish to destroy your cluster, run:

juju destroy-environment

This will terminate all EC2 instances including the bootstrap instance.

To summarise, I’ve shown how you can create a Juju cluster containing a load-balanced Tomcat with MySQL, serving your web application. We’ve seen how important it is for the application to be cloud deployable, allowing it to utilise managed relations. I’ve also demonstrated how you can upgrade your application once deployed.

How would you go about automating deployment of this Java based cluster to EC2? Utilise Puppet or Chef? Write your own scripts? How would you adapt your solution to add or remove servers to scale on demand? Can your solution support deployment to your own equipment? If the solutions that come to mind require a lot of initial time investment, you may be interested in Juju (juju.ubuntu.com).

In upcoming posts, I’ll show how you can use Juju to deploy this cluster. But for this post, I’ll give a brief Juju introduction.

Juju is a new Open Source command line deployment tool in Ubuntu 12.04 Precise Pangolin. It allows you to quickly and painlessly deploy your own cluster of applications to a cloud provider like EC2, on your own equipment in combination with Ubuntu MAAS (Metal as a Service – wiki.ubuntu.com/ServerTeam/MAAS), or even on your own computer using LXC (Linux Containers). Juju deploys ‘charms’, scripts written to deploy and configure an application on an Ubuntu Server.
The real automated magic happens through charm relations. Relations allow charms to associate to perform combined functionality. This behaviour is predetermined by the charm author through the use of programmable callbacks. For example, a database will be created and credentials generated when associating with a MySQL charm. Charms utilise relations to provide the user with traditional functionality that requires no knowledge of underlying networks or configuration files. And as the focus isn’t on individual machines, Juju allows you to add or remove further servers easily to scale up or down on demand.

Sound interesting? In my next post I’ll demonstrate deploying a web application to Tomcat and connecting it to MySQL.