A co-worker of mine recently ran across this post by Eamonn McManus, “Using the builder pattern with subclasses,” and after due consideration, settled on what McManus calls “a shorter, smellier variant.”

Shorter because:

it has only one Builder class per class

Smellier because:

it uses raw types

the builder’s self() method requires an unchecked cast

Frankly, I don’t find the shortness here a compelling argument for the smell, but I also think there’s more shortness to be found in McManus’ design while remaining fragrant. Thus:
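Here is a sketch of the shape such a builder hierarchy can take — a recursive type parameter gives each subclass builder its own return type from the shared setters, with no raw types and no unchecked casts. The Shape/Circle names are illustrative, not the code from the original post:

```java
// Sketch of a type-safe builder hierarchy using recursive generics.
// Shape, Circle, and their fields are illustrative examples only.
abstract class Shape {
    private final double opacity;

    Shape(Builder<?> builder) {
        this.opacity = builder.opacity;
    }

    double opacity() { return opacity; }

    // B is the concrete builder type, so shared setters can return B
    // and chaining keeps the subclass's own setters available.
    abstract static class Builder<B extends Builder<B>> {
        double opacity;

        abstract B self();

        B opacity(double opacity) {
            this.opacity = opacity;
            return self();
        }
    }
}

class Circle extends Shape {
    private final double radius;

    Circle(Builder builder) {
        super(builder);
        this.radius = builder.radius;
    }

    double radius() { return radius; }

    static class Builder extends Shape.Builder<Builder> {
        double radius;

        @Override
        Builder self() { return this; }  // no cast needed here

        Builder radius(double radius) {
            this.radius = radius;
            return self();
        }

        Circle build() { return new Circle(this); }
    }
}
```

With this shape, `new Circle.Builder().opacity(0.5).radius(2.0).build()` compiles cleanly: the inherited `opacity` setter returns `Circle.Builder`, so `radius` is still reachable after it.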

It has been a somewhat short sprint with not a lot to talk about. The interface is getting more and more usable, which is a good thing since I’m using it to manage the sprints. The filters and comparators are really helping with this.

Server instance name

The Lapis start program has been modified to take a second argument, which is used to distinguish between multiple running instances. The argument is simply there to appear on the command line when checking the running processes.

Before this was introduced, there was no real way to find out which process was for which instance. Now I know…

Due Date & Priority

I’ve added the due date and priority to workflows and tasks to complete the move towards Getting Things Done. Along with that, I created due date filters and comparators and priority filters and comparators.

It just never stops! Lapis Server 1.2.8 is now done, but it’s a release that is part of a bigger program of work, so I’m straight onto 1.2.9.

I had to create a few things to make it all available as a software platform.

A signup process (signup form, signup servlet, signup workflow). It’s pretty nice that after signing up to a workflow engine, the workflow engine uses a workflow to send you a nice email – it uses an email task!

Some terms and conditions and a privacy policy

A new stylesheet

Changes to the login process to include “forgot username” and “forgot password” functionality. Again, both of these use a workflow to send the emails.

Rewrite rules to make sure you can’t see the workflows you’re not supposed to.

Role-based security constraints to match what will be the various packages. The Tomcat realm class now reads the user’s role, so the web app has the same roles as the Lapis server.

Web interface improvements, such as moving the tasks link into its own menu and adding links to the privacy policy and terms and conditions.

A task status filter (active, completed, rejected, all) which somehow had been missed out in the design process and is DEFINITELY required to make it user friendly. The default is “active”, so that the task link is effectively a link to your current to-do list.

An available workflows datasource to make it more user friendly by selecting workflows from a drop-down list instead of typing them in. The datasource only lists the files the user has access to.

Changes to the server to be able to bind itself to a specific network interface in multi-homed network environments such as that of the Red Hat cloud.

Changes to the client command line tools to bind themselves to a specific network interface in the same multi-homed environments.

What’s next?

As part of my 1.2.9 sprint, I am looking at finishing the integration of the Red Hat environment (with the shutdown and startup scripts and a whole round of testing) and sending a link to a few people to do a soft launch and gather feedback before getting a real domain name!
After that, I’ll see how to get a payment solution in place to offer upgrades and start the process of making changes towards working with an organisation’s groups.

Attached JCR nodes

Attached nodes are now also supported by the wfstart.sh command line tool via a “lapis.attached={path}” argument. The new wfupdate.sh command line tool also supports attaching and detaching JCR nodes via the -attach and -detach options.

The web application allows you to browse the repository and select nodes to attach.

Lapis Server on Red Hat OpenShift and SaaS

I’ve started placing a new version of the web app on OpenShift, the Red Hat cloud platform. This is to be able to offer the software as a service. I still have to define my pricing plan, but the free option will enable users to instantiate a “todo” workflow. The first upgrade option would enable users to instantiate an “assign” workflow, where groups and group tasks would be enabled. The next upgrade option would see the introduction of custom workflows, where email tasks would be enabled. The final upgrade option after that would be a custom instance of the server with our support.

Multi-homed environments

An interesting side effect of looking at Red Hat OpenShift was that the servers have multiple IP addresses, and the default RMI configuration was of course picking the one with security restrictions. It is now possible to add a server property named “bindAddress” which forces the Lapis server to listen on a particular network interface. Likewise for the client tools: the local end of the socket can be made to pick the correct IP address when connecting to the server.
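On the server side, one standard way to honour a “bindAddress”-style property in RMI is a custom server socket factory that binds every listening socket to the chosen interface. This is a sketch of that technique, not the actual Lapis code; the class name is illustrative:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.rmi.server.RMIServerSocketFactory;

// Sketch: an RMIServerSocketFactory that binds listening sockets to one
// configured network interface instead of all local addresses.
class BindAddressServerSocketFactory implements RMIServerSocketFactory {
    private final InetAddress bindAddress;

    BindAddressServerSocketFactory(String address) throws IOException {
        // e.g. the value of a "bindAddress" server property
        this.bindAddress = InetAddress.getByName(address);
    }

    @Override
    public ServerSocket createServerSocket(int port) throws IOException {
        // Backlog of 50 is the java.net default; the socket listens only
        // on the given interface.
        return new ServerSocket(port, 50, bindAddress);
    }
}
```

Such a factory can then be passed to `LocateRegistry.createRegistry` or `UnicastRemoteObject.exportObject` so the RMI runtime only listens on the configured address.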

Audit trail

In addition to the workflows and the tasks showing an audit trail of when they are created, activated, rejected and completed, all the events (such as when a property is changed or a JCR node is attached) are now stored, giving a complete audit trail of what happened during the life of the workflow.
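Conceptually, an audit trail like this is just an append-only list of timestamped events per workflow. A minimal sketch of the idea — the class and method names here are hypothetical, not the Lapis API:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of a per-workflow audit trail.
class AuditEvent {
    final Instant at;
    final String type;   // e.g. "created", "activated", "propertyChanged"
    final String detail;

    AuditEvent(Instant at, String type, String detail) {
        this.at = at;
        this.type = type;
        this.detail = detail;
    }
}

class AuditTrail {
    private final List<AuditEvent> events = new ArrayList<>();

    // Every lifecycle change or property/node event appends one entry,
    // so the list replays the whole life of the workflow in order.
    void record(String type, String detail) {
        events.add(new AuditEvent(Instant.now(), type, detail));
    }

    List<AuditEvent> events() {
        return Collections.unmodifiableList(events);
    }
}
```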

What’s next?

I will be focusing on the SaaS web app for signup and the todo workflow (I’m already two-thirds through, so it shouldn’t take long) and then start on the assignment workflow and find out what that means for the groups.

I am looking at connecting the web application to JCR repositories at the moment so that I can “attach” JCR nodes to workflows (and also possibly watch nodes that are created/modified/deleted as trigger points for instantiating workflows). I’ve had a few issues along the way but the base logic is there to be able to instantiate JCR sessions from the details located in the Lapis Server.

Classpath

Make sure those jars are on the classpath; they were downloaded from the Maven repository or copied from within the Jackrabbit standalone jar file.

Storing repository details

A file containing the repositories can be created and referenced by the server property lapis.repositories.config (we set ours to etc/repositories.xml). Our example repositories file “mounts” the two JCR repositories of a standard CQ5 installation; note that the passwords stored in it are encrypted.

The connection code first retrieves the Lapis “mount points” of the JCR repositories (in our case, we have two: “//jackrabbit/cq5/dev/author” and “//jackrabbit/cq5/dev/publish”), which are then passed to the JackRabbit utility class to retrieve the repository. In order to get a session to the repository, we must get the username and password, which the client obtains by calling getJCRCryptedCredentials. The method is given a path (it can be anything below a mount point, so //jackrabbit/cq5/dev/author/var/audit/com.day.cq.wcm.core.page/content/home/6efc5a6b-f361-4d6a-9f5d-43a7ffc862ab works just as well as //jackrabbit/cq5/dev/author) and resolves the credentials for us from the encrypted details in the repositories file.

All we need to do then is call the login method of the repository object and get connected. I have in mind that the username/password combination would be for a read-only user and that another method would be available to pass your own username/password (this is yet to be implemented). I already have the development environment dumping data from the JCR repository this way, something I will have to replicate when I implement the changes for the web application.

Attaching JCR nodes to the workflows

In the very near future, it will be possible to attach a reference to a JCR node to workflows. The paths would be the Lapis “mounted” paths but I will be providing a path modulator/demodulator to translate between the path as the Lapis server would know it across multiple repositories and the path within the repository.
For example, the path //jackrabbit/cq5/dev/author/var/audit/com.day.cq.wcm.core.page/content/home/6efc5a6b-f361-4d6a-9f5d-43a7ffc862ab of an attached node in a lapis workflow could be translated into /var/audit/com.day.cq.wcm.core.page/content/home/6efc5a6b-f361-4d6a-9f5d-43a7ffc862ab so that a call can be made using the standard JCR API from the workflow. Once I have this developed, I’ll post so that this is documented.
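The translation itself is essentially prefix handling: strip the Lapis mount point to get the in-repository path, and prepend it to go the other way. A minimal sketch, assuming a class and method names of my own invention (the real modulator/demodulator is not yet published):

```java
// Hypothetical sketch of the path "modulator/demodulator" idea:
// translate between a Lapis-mounted path and the path inside the
// underlying JCR repository by handling the mount-point prefix.
class MountPointTranslator {
    private final String mountPoint; // e.g. "//jackrabbit/cq5/dev/author"

    MountPointTranslator(String mountPoint) {
        this.mountPoint = mountPoint;
    }

    /** Lapis path -> path usable with the standard JCR API. */
    String demodulate(String lapisPath) {
        if (!lapisPath.startsWith(mountPoint)) {
            throw new IllegalArgumentException(
                "Path is not under mount point " + mountPoint + ": " + lapisPath);
        }
        String rest = lapisPath.substring(mountPoint.length());
        return rest.isEmpty() ? "/" : rest;
    }

    /** Path inside the repository -> Lapis path across mounted repositories. */
    String modulate(String repositoryPath) {
        return mountPoint + repositoryPath;
    }
}
```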

Why is that a good thing?

Well, when I get this implementation completed, it will start to open the workflow engine to a lot of “enterprise”-level systems (and the companies that use them!). Anything that uses Apache JackRabbit, JBoss ModeShape, Adobe CQ, or IBM WCM, to name but a few, could now gain a workflow engine with very little effort. I believe this would definitely help put the Lapis Engine on the map.