Thursday, January 29, 2015

Introduction

Initially, this was supposed to be a short introduction about the topic in the title and an opportunity for me to get to know Apache Karaf Cellar.
Unfortunately, I couldn't finish the topic as planned because I ran into some unexpected problems.
So basically this is going to be a post about the problems I encountered.
At the end you'll find a TL;DR if you just want to get started.

Short introduction into the Configuration Admin Service

From the OSGi wiki:
"Configuration Admin is a service which allows configuration information to be passed into components in order to initialise them, without having a dependency on where or how that configuration information is stored."(http://wiki.osgi.org/wiki/Configuration_Admin)

Basically, you write key-value properties and a service which can use them.
All the "magic" is done by the Configuration Admin service, which is part of the OSGi Compendium Specification.
A good introduction can be found here.
The Configuration Admin service will also store the configuration somewhere for you.
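
In Karaf this typically means dropping a .cfg file with key-value lines into the etc/ directory; Configuration Admin then passes those properties to the service registered under the matching PID. A made-up example (PID and keys are purely illustrative):

# etc/com.example.greeter.cfg
greeting=Hello
repeat=3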

Set up your Apache Karaf

For my example I want to use my ManagedProcessEngineFactory from the 1.1.0-SNAPSHOT version of camunda BPM OSGi.
You can just clone the repository on GitHub and build it with mvn install.

Because I am quite lazy I started two Karaf instances on my laptop. If you want to do that, too, you'll have to change some port numbers for the second Karaf instance.
First, the ports in the etc/org.apache.karaf.management.cfg:

rmiRegistryPort
rmiServerPort

Second, the SSH port in the etc/org.apache.karaf.shell.cfg (forgetting this caused me some trouble).
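
For reference, my second instance ended up with settings along these lines (the concrete port numbers are arbitrary, just make sure they differ from the first instance's defaults):

# etc/org.apache.karaf.management.cfg
rmiRegistryPort = 1100
rmiServerPort = 44445

# etc/org.apache.karaf.shell.cfg
sshPort = 8102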
Next we gotta install Cellar on each Karaf instance.
Because we want to use the current version, we'll use version 3.0.1 of Cellar.
The general installation guide here has instructions for installation and startup.
Basically you just have to call from the Karaf console
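
The commands should look roughly like this (check the installation guide for the exact feature repository coordinates of your version):

feature:repo-add cellar 3.0.1
feature:install cellar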

If you somehow plan to build Cellar yourself, I recommend commenting out the "samples" module in the root POM.
All your Karaf instances should discover each other automatically.
Now we have to install and share the camunda feature (or whichever you want to use) in the cluster.

Install and share a feature

To do this task we have two choices. One would be to activate the listeners in every Karaf instance and use the "basic" commands.
For that you'll have to set the bundle listener value in the etc/org.apache.karaf.cellar.node.cfg to true (we won't need the other listeners in this example):
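
The relevant line in etc/org.apache.karaf.cellar.node.cfg should then look something like this (the exact property name may differ slightly between Cellar versions):

bundle.listener = true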

The other choice is Cellar's cluster:* commands. Those commands work like the basic ones, but you always have to provide a cluster group.
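
For the default cluster group, installing a feature looks roughly like this (the feature name here is an assumption based on my build; check cluster:feature-list for what's actually available, and note that command names vary a bit between Cellar versions):

cluster:feature-install default camunda-bpm-karaf-feature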

You should see that the feature got installed on both Karaf instances (check e.g. with feature:list | grep -i camunda).
Now we need a database.

Setting up the database

I gotta admit, this is where my first problems occurred, starting out funny and ending with me being a little bit annoyed.
My first problem was that I tried to use the in-memory version of H2. This won't work because, logically, every Karaf instance runs in its own JVM and thus gets its own in-memory database.
So, to allow connections from multiple JVMs, I started H2 in server mode (see here for more information).

java -cp h2*.jar org.h2.tools.Server

The Karaf instances then connect to it with a URL like jdbc:h2:tcp://localhost/~/test.

The next problem was that, because of some exceptions, the ProcessEngines started and stopped in seemingly random order.
Having the databaseSchemaUpdate property set to create-drop caused problems with tables not being present because of this random dropping/creating.
I recommend creating the tables yourself (the SQL scripts are here).

This didn't solve all of my database problems. I suspected H2 of not being able to handle the same user logging in twice (which it is capable of, as far as I know now).
After that I switched to MySQL.

Setting up MySQL in Karaf

MySQL is a little bit more complicated to set up than H2 because we have to create a proper datasource.
First, we need to install Apache Karaf DataSources:
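
That boils down to something like the following (the option names for jdbc:create vary between Karaf versions, so verify with jdbc:create --help; the datasource name and credentials here are made up):

feature:install jdbc
jdbc:create -t mysql -u karaf -p karaf camundadb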

The datasource create command has to be executed on both Karafs because the datasource-*.xml that'll be created in the deploy directory won't be copied.
For the ProcessEngine to be able to find the MySQL datasource it needs a JNDI name.
To give a datasource a JNDI name we need Apache Karaf Naming.

feature:install jndi

Now the datasource will automatically get a JNDI name (check with jndi:names). If you don't see the jndi:* commands you'll have to install the feature manually on the second Karaf.

Finally we need the MySQL connector jar. We can find it here. Simply drop the jar into the deploy directory.

The MySQL database works fine for me so far. Let's take a look at the configuration file.

The configuration file

When I started with this "experiment" I thought that making use of the etc/ directory in Karaf would be a good idea, but now I gotta say:
please, don't try to do this file based. I tried a lot of combinations and it didn't work out.
The closest I got was the configuration arriving on both Karafs but only one engine being created. Jean-Baptiste and Achim really tried to help me on the mailing list. Nevertheless, I couldn't get it running. You are free to try.

Karaf watches the etc/ directory for configuration files. To deploy one for the ManagedProcessEngineFactory you'll have to name it
org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory-1.cfg.

I switched to a bundle which contains the configuration.

The configuration bundle

As mentioned before, for a ManagedServiceFactory to create a service it needs one or more configurations.
We'll use a simple version of the configuration:
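
Something along these lines (the exact property keys supported by the ManagedProcessEngineFactory are defined by the camunda BPM OSGi extension, so treat these as illustrative, including the JNDI name):

processEngineName = cellar-engine
dataSourceJndiName = osgi:service/jdbc/camundadb
databaseSchemaUpdate = false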

