Coding Tech-Talk


A while ago we started developing a fun tool to provoke extreme situations like out-of-memory or stack-overflow errors. Now I have added a new feature to stress the CPU. I also measure the consumed CPU time to see how much CPU time is ‘available’.

The tool is a good indicator of how much CPU time is available for the JVM. Especially in virtual environments this information is useful. But you need to set the parameters carefully: don’t stress the CPU too long at once, and sleep between the test cycles, so the hypervisor does not move other VM guests off your system and the result stays valid.
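The idea can be sketched in a few lines. This is a minimal illustration of the measurement, not the tool itself: short busy bursts, a sleep in between, and `ThreadMXBean` to compare consumed CPU time against wall-clock time (the class and parameter values are my own choices for the sketch).

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuStress {
    /** Burn the CPU for roughly burstMillis and return the consumed
     *  CPU time as a percentage of the elapsed wall-clock time. */
    static double stressCycle(long burstMillis) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long cpuStart = bean.getCurrentThreadCpuTime(); // nanoseconds of CPU time
        long wallStart = System.nanoTime();
        long busyUntil = wallStart + burstMillis * 1_000_000;
        long x = 0;
        while (System.nanoTime() < busyUntil) {
            x++; // busy loop to burn CPU
        }
        long cpuUsed = bean.getCurrentThreadCpuTime() - cpuStart;
        long wallUsed = System.nanoTime() - wallStart;
        return 100.0 * cpuUsed / wallUsed;
    }

    public static void main(String[] args) throws InterruptedException {
        // Short bursts with a sleep in between, so the hypervisor does not
        // move other guests around and the measurement stays valid.
        for (int i = 0; i < 3; i++) {
            System.out.printf("cycle %d: ~%.0f%% CPU of wall time%n", i, stressCycle(200));
            Thread.sleep(800);
        }
    }
}
```

On an idle machine the ratio is close to 100%; on an oversubscribed hypervisor the thread gets scheduled away and the ratio drops.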

If you develop Karaf-related bundles in the Eclipse IDE or another Java IDE, the common way is to create Maven-driven projects and let Maven manage dependencies and the build.

Since Karaf 4.0 you need a special Maven plugin that parses the classes to find and automatically register services. The plugin is called ‘karaf-services-maven-plugin’ and runs on every build.

Eclipse uses the ‘Maven Builder’ to organise and build Java classes in the background, so you can see errors while you are working on the files and detect compile problems fast. The Maven Builder is therefore invoked for every ‘Automatic Project Build’, e.g. when you save files, start Eclipse, or start exports / Maven builds.

I found that the performance of the automatic build slowed down rapidly once I started using the Maven plugin. In fact, every save took 60 seconds. I have 48 Maven-related projects in my workspace; starting Eclipse kept me from working for at least 30 minutes!

Not very happy about this behaviour, I searched for solutions. I reduced the usage of the Karaf plugin, switched off automatic project builds in Eclipse, and closed unused projects. But the effect was that code changes were no longer refactored through the source code. Not the solution I wanted.

Yesterday I came back to this plague, downloaded the code, traced the execution time and looked at the performance. I found out that the Apache ClassFinder is part of the problem. Calling it eats most of the time, and the plugin collects all the dependencies/artefacts to be parsed by the ClassFinder.

The simplest way to reduce the runtime problem is to reduce the number of parsed classes, so that is what I did. I created a filter option for the collected artefacts and set the filter to the minimum set of artefacts the ClassFinder actually needs, e.g. the classes that are extended. To have a fast solution I created a fork of the Karaf plugin and implemented my change there. Without using the filter it works like before.
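The core of such a filter can be sketched as follows. This is not the plugin's real code, just an illustration of the idea with made-up method and artifact names: cut the artifact list down by include patterns before the ClassFinder ever sees it.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ArtifactFilter {
    /** Keep only artifacts whose coordinate starts with one of the
     *  include patterns; everything else is skipped by the class scan. */
    static List<String> filterArtifacts(List<String> artifactIds, List<String> includes) {
        return artifactIds.stream()
                .filter(id -> includes.stream().anyMatch(id::startsWith))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList(
                "org.example:my-bundle",
                "org.apache.karaf:karaf-core",
                "com.google.guava:guava");
        // only our own artifacts need to be scanned for service annotations
        System.out.println(filterArtifacts(all, Arrays.asList("org.example")));
    }
}
```

Scanning one bundle instead of dozens of third-party jars is where the time is won.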

Since Java 8 it’s possible to use references to methods, as is already standard in most other languages. The notation is Class::methodName, e.g. MyClass::myMethod.

But the new feature is not what it promises. A developer would expect to get a reference object like Method to work with the referenced method. But Java did not implement real references; it wraps the reference in a calling lambda expression, something like (o) -> o.myMethod(), and returns a reference to this lambda construct.

What a stupid behaviour!

This way it’s not possible to get any information about the referenced method: not the name, nor the expected return type, etc.
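A small demo makes the point. The method reference below is assigned to a functional interface, and the object you get back is a synthetic lambda class instance, not a java.lang.reflect.Method, so there is nothing on it to ask for the referenced method's name or signature.

```java
import java.util.function.Function;

public class MethodRefDemo {
    public static void main(String[] args) {
        // The method reference compiles to a functional interface instance,
        // effectively wrapping a call like (s) -> s.length() ...
        Function<String, Integer> len = String::length;

        // ... so the runtime class is a generated lambda class. It offers
        // no API to retrieve the name or return type of String.length().
        System.out.println(len.getClass().getName());
        System.out.println(len.apply("hello")); // 5
    }
}
```

The printed class name is something synthetic chosen by the JVM, which is exactly the problem: the identity of the referenced method is gone.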

Currently I store the child-order information in the child properties. This means each child holds a ‘sort’ property that defines how to sort this node into the list of child nodes.

This strategy has a lot of problems. First of all, if I try to change the order I have to change all child nodes. This can end in an access-denied problem if I do not have access to one of the child nodes. Second, if I move a child node, a stale ‘sort’ parameter may disturb the new order information.

Therefore the best, and mostly unused, strategy is to store the order information in the parent node. If you are able to write the parent node, you are able to reorder its children, and you are not forced to change the children themselves.
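The idea can be sketched with a hypothetical Node class (names and structure are mine, not taken from any real tree API): the parent owns an ordered list of child ids, so a reorder is a single write on the parent and the children stay untouched.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ParentOrder {
    /** Sketch: the parent keeps the order of its children. */
    static class Node {
        final String id;
        // ordered list of child ids; reordering only writes the parent
        final List<String> childOrder = new ArrayList<>();

        Node(String id) { this.id = id; }

        void moveChild(String childId, int newIndex) {
            if (childOrder.remove(childId)) {
                childOrder.add(newIndex, childId); // no child node is modified
            }
        }
    }

    public static void main(String[] args) {
        Node parent = new Node("parent");
        parent.childOrder.addAll(Arrays.asList("a", "b", "c"));
        parent.moveChild("c", 0);
        System.out.println(parent.childOrder); // [c, a, b]
    }
}
```

Note that moveChild needs write access only to the parent, which is exactly the access-control advantage described above.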

Creating a new portal framework has been a long-planned project for me, for several reasons. Lately I tried to use Apache Sling as a portal for my own projects, but finally I had to drop this idea. Still, I gathered a lot of impressions from the different portal software I have already used and will try to bring the good parts together.

The first try is out now. In the last weeks I spent most of the time on the specification, especially how the HTTP request and resource resolving should work to cover a wide range of requirements.

In the current version a demo application shows a simple website, and if you have imagination you can see … whatever.

Resolving a renderer for a resource is not as simple as it seems. The team from Apache Sling showed me that rendering is more complex and should be more than a simple content output.

In a modern WCM a resource is an abstract thing containing more metadata than pure content. All the metadata together brings useful content to the user, and there are different ways to present it. HTML is the visible presentation of the data; JSON and XML are technical presentations needed to download data in the background. Sling shows that we can have different renderers for the same content, depending on the current use case.

How to find the correct content renderer is an interesting question. Sling uses request parameters like the request method and parts of the requested path to find a resource. Parameters from the resource link to the correct script rendering the content (see this picture).
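A drastically simplified sketch of this kind of resolution, loosely inspired by the Sling idea but not Sling's actual algorithm (the method name and the returned key format are invented for the example): the request method plus the path extension select the renderer, with HTML as the default presentation.

```java
public class RendererResolver {
    /** Pick a renderer key from the request method and the extension
     *  of the requested path; default to the visible HTML presentation. */
    static String resolveRenderer(String method, String path) {
        int dot = path.lastIndexOf('.');
        String ext = dot >= 0 ? path.substring(dot + 1) : "html";
        return method + "." + ext;
    }

    public static void main(String[] args) {
        System.out.println(resolveRenderer("GET", "/content/page.json")); // GET.json
        System.out.println(resolveRenderer("GET", "/content/page"));      // GET.html
    }
}
```

The same resource at /content/page can thus be delivered as HTML to the browser and as JSON to a background request, which is exactly the multi-renderer behaviour described above.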

Playing around with Bonita sub-processes gave me a couple of interesting discoveries…

The first step was to create a sub-process by selecting tasks and using the context menu to create a new sub-process. A sub-process is a closed process connected to the main process by an interface.

But first, a list of findings about the ‘create subprocess’ function.

The new process lacks a lane. It’s easy to create one, and you should do it to define a default actor.

The new process lacks a start and an end point. It works without them, but for consistency and a defined flow you should create them.

Every task is renamed to ‘Copy of ‘. That’s ugly.

The new process is disconnected from the main process. This means different variables, so you need an in/out variable mapping to transfer data between the processes. The mapping is generated once when the sub-process is created and is not updated automatically afterwards; on top of that, the generated mapping is not correct at all.
To fix the main-to-sub mapping change the mapping type from ‘Assigned to Contract Input’ to ‘Assigned to Data’.
To create the sub-to-main mapping use the ‘Auto map’ button.

Sub-processes also have a separate set of actors. You need to map them separately.

It’s not possible to stop the main process from inside the sub-process. Every ‘end’ jumps back into the main process execution. This can be a problem if fatal errors occur inside the sub-process.

To use the interface between main and sub-process you can use the variable mapping as described above. For every new variable you need to extend the mapping. But you are free to use the sub-process in different situations and map it with multiple data sets.

More interesting is the possibility to send errors to the calling process. Use the ‘end error’ endpoint and define an error code. Add the ‘catch error’ event at the ‘call activity’ and you can handle the error result of the sub-process. Important: no data is transferred from sub to main process in case of an error.

You can use my example process to explore the behaviour. Initial and Step1 to Step4 are the default flow; Steps 2 and 3 are part of the sub-process. Try setting variable values throughout the process. In Step3 you can choose the ‘Error’ button to provoke an error; use it and you will see that the data is not changed in the main process. Download!

Sub-processes are interesting for separating or re-using parts of the main process. But the benefit is small if the creation and maintenance process should stay simple.