1. Introduction

Developing applications for the Java platform typically involves a number of steps that need to be performed before execution. At the very least, a Java compiler must be run to turn Java source code into executable byte code.

Coding environments like the Eclipse or IntelliJ Integrated Development Environments do a great job of hiding these operations from the user for simple applications. When applications become more complex though, modularization needs and team development approaches call for more sophisticated tools to make sure that growing system complexity can be managed.

The traditional development approach for Java applications resembles that of desktop applications. Apart from compiling source code into an executable representation, additional operations, the build process, need to be completed before execution. This includes resolution of module dependencies and packaging of generated as well as retrieved artifacts into some deployable file format that is installed (deployed) in some execution environment. For Java applications the latter is either a standalone Java Virtual Machine (JVM) or an application server.

For desktop applications, which can assume very little about their execution environment, it is a necessity to be completely self-contained and easily distributable in the form of a bundle of files.

The situation for Intranet and Internet applications is different. A major part of the lifecycle of a business application installation consists of change: ongoing development, customization, extension, and repair.

All but the most trivial business applications are a composition of subsystems that make strong assumptions about the presence and behavior of other subsystems – all together forming a software system that offers a wide range of access methods, performs recurring background jobs, integrates with other systems in a whole landscape of systems, and operates over a shared and evolving data asset.

Platforms that are heavily geared towards the development and operation of business applications, such as SAP's ABAP environment or Oracle's PL/SQL platform, therefore take a different view. Instead of mimicking the concept of a most generic operating system that runs largely independent, locally installed binary applications, the focus of these environments is to perform the functions of a large, highly interconnected software system at scale, agnostic to the single machine, with as little local configuration as possible. Instead of being the end of a tool chain that merely executes binary code in some undecipherable interplay, they build on a centrally defined, customizable and extensible system definition in the form of source code and configuration that is executable, without build process complexities, by arbitrarily many machine nodes running the platform. This is crucial to managing software life cycle complexity at scale.

The z2-Environment brings these qualities to the world of Java applications. We call it the system-centric approach.

1.1. What is the z2-Environment

Practically speaking the z2-Environment is a Java-based runtime environment that knows how to update itself from source code and configurations stored in repositories of various technologies, including source control systems like Git and Subversion or just a plain old file system.

Z2 defines an extensible component and modularization model that, based on a few basic paradigms and interfaces, allows the construction of full-blown modular application systems.

The z2-Environment can be used to build Java EE Web applications as well as standalone Java applications without restricting the use of third-party libraries and popular frameworks like the Spring Framework, Hibernate/JPA, and many more.

Z2 is strictly implemented in Java. Versions before 2.4 require Java 6 and support the Java 6 and Java 7 language levels. As of version 2.4, Z2 requires Java 8.

1.2. Online resources

Z2 can be installed and tried out in a matter of minutes. There are various how-tos and samples available that explain and demonstrate the use of Z2.

2. Understanding a Z2 Home

In order to access component repositories and to implement its component and modularization model at runtime, the elementary capabilities of Z2 need to be installed as a normal Java program in binary form. The most fundamental features of Z2 are implemented in the z2 core. Its source code and its simple build script, including instructions, can be found on the Wiki site.

We call a local installation of a z2-core a z2 Home.

In its basic form, the z2 core knows little more than running Java main programs from source in a modular context. In order to turn a z2 home into a node of a capable system we need to connect it to repositories defining further component types, libraries, and applications.

This is done by declaring component repository components as will be explained below. Choosing remote repositories will give you a system that is centrally defined and scales easily with consistent updates.

Choosing only local repositories leads to a system that can be maintained and distributed like a scripting language application, albeit implemented in Java.

2.1. Home install in 2 minutes

Generally you install a Z2 Home by checking out (Subversion) or cloning (Git) a Z2 core. When using Git it is best to create a shared root folder, say install, go into install and issue

git clone -b master http://git.z2-environment.net/z2-base.core

to have a Z2 Home at install/z2-base.core (the development version in this case). The Z2_HOME is then install/z2-base.core.

Similarly, using Subversion, you would have an install folder and in there issue the corresponding checkout command line.

2.2. Folder Structure of a Z2 Home

data:

Applications and system extensions store data they want to preserve as file system data in the data folder of the home installation. While this should be the exception, there are good use cases for it. By convention, software components should create namespaced or otherwise sufficiently unique subfolders.

The only really important convention around the data folder is that it should not be deleted without thinking about the potential data loss.

work:

The work folder holds temporary data that may be deleted when the home process is stopped. For example, the component repositories maintain locally cached resources here.

Stopping the home process, removing the work folder, and starting the home process again should not change the system behavior nor application behavior except for some extended start up time due to cache misses.

run:

The run folder contains all the binaries to start and bootstrap the home process. These binaries are not compiled on demand but must be compiled in an ordinary build process. However, in general there is no customizing or extensibility required on this level. The only exception to this rule is the configuration of the initial repositories (see Components and Component Repositories).

These binaries and configurations exist for bootstrapping (for example, the Java compiler has to be executable before anything else can be compiled).

The subfolder bin of the run folder contains various start scripts and configuration files:

runtime.properties

The properties stored in runtime.properties are loaded by the home process and all worker processes into the respective JVM system properties.

launch.properties

A z2 home can be started in different “modes”. This is a convenience feature to simplify the application of various Virtual Machine settings for the home process (many of which propagate to worker processes – including debug settings).

With the np option the home process will not end up showing an input prompt – which is favorable to running as a background process (e.g. using nohup or via an init script on Linux). And of course, if you run

./go.sh -mode:debug - -np

you will get both.

The general syntax is

./go.sh <parameters for the launcher> - <parameters for the home process>

where the launcher is the small program that computes the actual Java command line as indicated in the previous section.

When running the z2 Environment locally, in particular during application development, it is convenient to have a graphical user interface (GUI). Adding the gui option achieves just that. For example

./go.sh -mode:debug - -gui

starts the home process with a Java GUI that lets you scroll through the home and worker processes' console output and manage synchronizations as well as the current list of worker processes.

The gui shell command is a shortcut that spares you the gui option. I.e.

./gui.sh

is a short version of ./go.sh - -gui.

2.4. The Base Repository

The Z2 core that gets cloned (or checked out) to create a Z2 Home contains exactly what is needed to bootstrap a running environment. All further definitions, code, and configuration are retrieved via additional component repositories that are typically accessed remotely.

The starting point, from the perspective of the Z2 core, is the so-called Base Repository.

The Base Repository is defined in Z2_HOME/run/local/com.zfabrik.boot.config/baseRepository.properties. By default the Base Repository points to the z2-base.base repository hosted on z2-environment.net.

2.5. Worker Processes

As indicated above, the z2-Environment can manage further JVM processes to better support heterogeneous load scenarios without compromising the ability to apply updates consistently or the stability of the home process.

Worker processes are, from a home process perspective, regular z2 components. See also the documentation of the component type com.zfabrik.worker below. What makes worker processes different is their virtual machine configuration and the set of target states to attain (see com.zfabrik.systemState below). Target states, on the other hand, group components, e.g. web applications to be started and kept alive.

Worker processes are typically loaded when maintaining a home layout (see below for com.zfabrik.homeLayout). A home layout is simply a list of worker process component names that identify the worker processes to be started when (re-)loading the home layout.

The usefulness of home layouts becomes clear when understanding that the home process always loads the home layout specified by the system property com.zfabrik.home.layout. This way, completely different worker process combinations can be run from the very same shared configuration store by starting home processes with different values of the system property com.zfabrik.home.layout.

3. Anatomy

3.1. Working on-demand

Much of the goodness of the Z2-Environment comes from the fact that it has a pervasive on-demand architecture. That means that whenever the runtime binds resources, it is for a clear and understandable reason: either because a need (a dependency) has been declared or because the specific task at hand requires it.

While that sounds trivial, it is not. Unlike the implicit “start everything deployed” or “start everything in a list” approaches that many application servers implement, binding of runtime resources from the potentially large pool of components available in a component repository (see below) happens strictly as required, based on target state configuration – which eventually translates to simplified on-demand operation of large scale-out scenarios with heterogeneous node assignments.

The sibling of the load-on-demand approach is the unload-on-invalidity approach. When repository definitions have been updated, in development but also in production scenarios, Z2 runtimes can adapt to the changes made. That requires understanding which component definitions have become invalid and “unloading” these from memory. Because of the modular nature of components and the heavy re-use of resources, invalidation of one component typically implies that other, dependent components have implicitly become invalid as well.

For example, a change in an API defined in some Java component may imply that web applications have to be restarted.

The abstraction for resources that have dependent resources is the Resource Management system of Z2.
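The invalidation cascade described above can be sketched as a tiny dependency graph. This is purely illustrative – the class and method names are made up and not part of the actual z2 core API:

```java
import java.util.*;

// Illustrative sketch of dependency-driven invalidation; names are invented
// and do not correspond to the actual z2 resource management API.
public class Invalidation {
    static final Map<String, Set<String>> dependents = new HashMap<>();
    static final Set<String> invalid = new HashSet<>();

    // declare that 'dependent' relies on 'dependency'
    static void dependsOn(String dependent, String dependency) {
        dependents.computeIfAbsent(dependency, k -> new HashSet<>()).add(dependent);
    }

    // invalidating a component transitively invalidates everything depending on it
    static void invalidate(String name) {
        if (!invalid.add(name)) return; // already invalidated
        for (String d : dependents.getOrDefault(name, Set.of())) {
            invalidate(d);
        }
    }
}
```

Declaring that a (hypothetical) mymodule/webapp depends on mymodule/java, which in turn depends on api/java, and then invalidating api/java, marks all three invalid – mirroring the web application restart example above.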

3.1.1. The Resource Management System

The Resource Management system is at the heart of the Z2 runtime. Essentially anything that binds runtime memory or represents components is internally modeled as extensions of the Resource class (see Resource).

Resources represent any kind of abstraction that may be made available for some time and that may have dependencies onto other abstract resources, such as cache regions, applications, etc. In particular z2 components are resources.

Resources are provided by Resource Providers, each of which establishes a namespace of resources. One of them is the components resource provider, which uses the component factory mechanism to delegate resource construction further.

A resource can be asked for objects implementing or extending any given Java type using the IResourceHandle interface. For components, the IComponentsLookup.lookup method is simply a delegating facade to that.

A complete description of the resource management system is beyond the scope of this section. Please see the documentation of the com.zfabrik.resources packages in the core API Javadocs.

3.1.2. Embedded runtime and the Main Runner

The Z2 environment can be used as a multi-process server environment, which is what we looked at above, or embedded.

Running it embedded simply means to initialize the resource management system and component system from within another JVM process.

This execution mode can be handy for various purposes:

You can use it to run “Main” programs that are defined in some component repository from the command line without worrying about local build environments (and dependency resolution)

Sometimes you have no control over the execution mode because your code has been started by some other infrastructure. This is for example true for Hadoop Map-Reduce jobs. In that case the Hadoop Map-Reduce implementation starts tasks from a simple JAR file on some machine. Using the embedded mode we can execute Map-Reduce jobs defined in component repositories, without complicated job assembly into a hadoop job jar.

To facilitate the embedded mode, the Z2 home provides the z_embedded.jar in Z2_HOME/run/bin.

A prerequisite to using Z2 in an embedded way is to have a Z2 home installation in file system reach. That home installation will be used to cache component repository content and binaries – i.e. it is essential to actually run Z2.

An example invocation retrieves the binaries of the Java component com.zfabrik.dev.util/java into the folder test; see com.zfabrik.dev.util for the command line and more details. The other way of embedded execution is via the ProcessRunner class (in the core API).

3.2. Components and Component Repositories

Everything you ever touch that the z2-Environment is supposed to understand is organized in Components. Z2 is built around the concept of named components that are defined in a well-defined repository structure. This level of understanding of the resources that implement some functionality is essential so that z2 can detect when resources have been modified and corresponding runtime objects have become invalid, and so that z2 can be extended with new semantics, that is, new types of components.

More specifically, the term Component translates in z2 to runtime objects that implement semantics according to a Component Type, have a well-defined, location-derived name, and are declared by a set of properties and, optionally, any kind of file resources – e.g. holding the files of a Web Application.

Even more specifically, most existing Component Repositories implement the following folder structures to define components:

<module>/<local>.properties

Defines component <module>/<local> whose type is the value of the property com.zfabrik.component.type as set in the property file <local>.properties.

<module>/<sub>/
	z.properties
	<file/folder>
	<file/folder>
	…

Defines component <module>/<sub> whose type is the value of the property com.zfabrik.component.type as set in the property file z.properties.

The component furthermore has all resources defined in the files and folders under <sub>.

These can be accessed using IComponentsManager.INSTANCE.retrieve(<module>/<sub>)
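For illustration, a component mymodule/web could be declared by a file mymodule/web/z.properties. The web application type name is taken from the component types section below; everything else in this sketch is a made-up example:

```properties
# declares component mymodule/web; the type property selects the component factory
com.zfabrik.component.type=com.zfabrik.ee.webapp
# further, type-specific properties would follow here (examples only)
```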

The module taxonomy has no strict technical meaning to Z2. There is no internal object representing a module. Via conventions, however, the module (the path without its last segment) has a rather prominent function:

Typically the module matches the development project granularity

The resolution of Java resources for a component defaults to <module>/java (see Java components).

Component repositories define the reality for the z2 environment. So it is important to understand this concept to understand z2.

Component repositories are, of course, declared as components themselves. Consequently, component repositories may hold further definitions of component repositories – potentially leading to some reality distortion (aka bootstrapping) issues – in the rare case that you do advanced repository wiring.

When the z2-Environment starts up, it has hard-coded knowledge of the Local Repository that is stored in Z2_HOME/run/local (see also above). It is the root of the Z2 component universe from a Z2 Home perspective.

The following diagram gives an overview of the few entities that really are the heart of the Z2 core – from component repository to resource via synchronization:

The notable exceptions to the repository structure above are Maven component repositories and – to some extent – the Hub Component Repository. The former derive Java components from Maven repository artifacts. That is, while the underlying structure is completely different, the repository implementation presents it in a Z2-compliant way (see Maven component repositories).

The Hub Component Repository turns a Z2 system into a repository for another Z2 system with the purpose of reducing bandwidth requirements for the original repositories or to not send source code over the wire (see the Hub Component Repository).

3.2.1. Synchronization with updates

At times, frequently when you are developing and less frequently in production, you want your runtimes to get up to date with respect to repository contents. That process is called Synchronization. The ability to synchronize with repositories is a particular capability of the z2-Environment and responsible for much of its goodness.

The synchronization process happens in three phases: At first, in the pre-invalidation phase, all component repositories (actually all “synchronizers”, but component repositories are generally connected to synchronizers; see also ISynchronizer) are asked to check whether there are updates available and which components (by name) will be affected. In the simplest case, the file-system-stored component repository, the check will examine folders to find out whether files have changed since the last check.

When that phase has completed, all components that have been identified as subject to updates will be invalidated. Invalidation is a concept of the Resource Management system underlying z2. Loosely speaking, it means that a component is asked to let go of all state but its name. Anything that depends on repository content or on other components is to be dropped.

In the completion phase of the synchronization, synchronizers are asked to make sure that at the end of the completion phase the runtime has attained operational mode again. That is maybe the most interesting phase, as the actions to that end may vary greatly.

For example, the home synchronizer (com.zfabrik.boot.main/homeSynchronizer) will simply try to attain the home_up state again.

The worker synchronizer (com.zfabrik.workers/workerSynchronizer) will send all invalidations to the worker processes and then ask them to attain their target states again.

Note that synchronizers have a priority and are called in a defined order, so that the worker synchronizer is called before the home synchronizer. As worker processes may have been invalidated in the second phase, it would be unreasonable to first bring them up again (home synchronizer) just to tell them about invalidations once more.
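The three phases and the priority ordering can be sketched as follows. This is an illustrative model only; the actual ISynchronizer interface and its method signatures differ:

```java
import java.util.*;

// Illustrative three-phase synchronization with priority ordering; the
// interface below is invented and not the actual z2 ISynchronizer API.
public class Sync {
    interface Synchronizer {
        int priority();                  // higher priority runs first
        Set<String> preInvalidate();     // phase 1: report affected components
        void complete(List<String> log); // phase 3: re-attain target states
    }

    static List<String> synchronize(List<Synchronizer> synchronizers, Set<String> invalidated) {
        List<String> log = new ArrayList<>();
        List<Synchronizer> ordered = new ArrayList<>(synchronizers);
        ordered.sort(Comparator.comparingInt(Synchronizer::priority).reversed());
        // phase 1: collect the names of components affected by updates
        for (Synchronizer s : ordered) invalidated.addAll(s.preInvalidate());
        // phase 2: the affected components would be invalidated here
        // phase 3: completion in priority order (e.g. workers before home)
        for (Synchronizer s : ordered) s.complete(log);
        return log;
    }
}
```

With a worker synchronizer at higher priority than the home synchronizer, completion reaches the workers first, matching the ordering described above.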

3.2.2. File system support

The simplest of all built-in component repositories is the file system component repository. All that is required is a file system folder holding component declarations and component resources in a structure as described above. As laid out below, always make sure the repository is started early on by declaring a participation in the system state com.zfabrik.boot.main/sysrepo_up.

3.2.3. Subversion support

To avoid problematic licenses, the z2-Environment unfortunately does not come with complete built-in Subversion connectivity. Additional configuration steps are required once to enable Subversion support on your side, as described in the Subversion How-To.

As noted above, it is important to make sure your repository participates in the system state com.zfabrik.boot.main/sysrepo_up, i.e. you should add the corresponding participation line to the repository declaration. The URL of the repository should point to a repository folder structure as outlined in Components and Component Repositories. For example, the Base Repository of the z2@base distribution has the URL:

http://z2-environment.net/svn/z2-environment/trunk/z2-base.base
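Putting this together, a Subversion repository declaration might look like the following sketch. The component type name com.zfabrik.svncr and the property names here are assumptions (patterned after the Git repository type com.zfabrik.gitcr below) and should be checked against the actual documentation:

```properties
# hypothetical declaration of a Subversion-backed component repository
com.zfabrik.component.type=com.zfabrik.svncr
# points to a repository folder structure as outlined above
svncr.url=http://z2-environment.net/svn/z2-environment/trunk/z2-base.base
# participation in the system state, as required above (property name assumed)
com.zfabrik.systemStates.participation=com.zfabrik.boot.main/sysrepo_up
```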

3.2.4. Git support

The Git version control system (VCS) is a distributed version control system (DVCS). As opposed to centralized VCS, such as Subversion above, in a DVCS users hold a copy (called a clone) of the repository content in their local environment, typically on the local disk, and can execute all typical modification operations, such as adding files or committing changes, against the local repository before sending updates back to a remote repository or retrieving updates from it.

Currently all framework development for Z2 happens in Git. All results are however available from Subversion repositories as well.

From a Z2 perspective, a DVCS has the advantage of offering a slightly easier way of getting your own local repository that is fully under your control. Moving changes between systems also has a built-in solution. On the downside, you pay by distributing complete copies of your system's repository, which may turn into a problem once repositories get significantly bigger than what is actually needed for the given scenario. That's why there is an implied tendency towards more and smaller repositories when using Git and fewer but larger repositories when using Subversion.

In order to add a Git component repository, declare a component of type com.zfabrik.gitcr as described in Git component repositories.

As noted before, it is important to make sure your repository participates in the system state com.zfabrik.boot.main/sysrepo_up, i.e. you should add the corresponding participation line here as well.

3.2.5. Maven Repository Support

Maven repositories such as Maven Central are a huge source of open-source libraries made available directly by the copyright holder. Maven repositories can be integrated into a Z2 system as component repositories. In fact, most of the samples accessible from the Z2 Wiki as well as prominent add-ons such as the Hibernate and Spring add-on make use of this approach.

The main idea is that, based on some root artifacts and some Maven remote repository configuration, jar artifacts and their dependencies will be made available as Java components in Z2 that can be referenced or included as best suits.

Artifacts in Maven repositories have a fully qualified name of the form

<groupId>:<artifactId>:<version>

or

<groupId>:<artifactId>:<packaging>:<version>

By default, a jar artifact <groupId>:<artifactId>:<version> will result into a Java component of name

<groupId>:<artifactId>/java

As usual with Maven, if resolution of root artifacts and dependencies leads to artifacts of the same packaging, group id, and artifact id but with different versions, a conflict resolution takes place (the higher version number will be used).

By default, all non-optional compile scope dependencies will be resolved. The resulting Java component will have the target artifact as API library and all non-optional compile scope dependencies as public references in their mapped form.
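The naming convention and the “higher version wins” rule can be sketched in a few lines. This is an illustrative model only (naive numeric version comparison, packaging ignored), not the actual repository implementation:

```java
import java.util.*;

// Illustrative mapping of Maven artifact names to z2 Java component names
// and a naive "higher version wins" conflict resolution.
public class MvnMapping {
    // Map "<groupId>:<artifactId>:<version>" to the z2 Java component name.
    static String toComponentName(String artifact) {
        String[] p = artifact.split(":");
        return p[0] + ":" + p[1] + "/java";
    }

    // Pick the highest version per group:artifact pair.
    static Map<String, String> resolveConflicts(List<String> artifacts) {
        Map<String, String> winner = new HashMap<>();
        for (String a : artifacts) {
            String[] p = a.split(":");
            String key = p[0] + ":" + p[1];
            String version = p[p.length - 1];
            winner.merge(key, version, (old, v) ->
                compareVersions(v, old) > 0 ? v : old);
        }
        return winner;
    }

    // naive dot-separated numeric comparison, e.g. 1.2.0 > 1.1.2
    static int compareVersions(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }
}
```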

The z2 core will use lazy component class loaders to make sure that the use of included libraries has virtually no runtime penalty.

An example configuration of a component repository from a Maven artifact repository may look like this:
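The configuration example itself is not reproduced here. As a sketch only, using the placeholder notation from above – all property names are assumptions and need to be checked against the Maven component repositories documentation – such a declaration might read:

```properties
com.zfabrik.component.type=com.zfabrik.mvncr
# where to fetch artifacts from (property name assumed)
mvncr.repository=https://repo1.maven.org/maven2
# resolution roots (property name assumed)
mvncr.roots=<groupId>:<artifactId>:<version>
# exclude an artifact from the resolved graph (property name assumed)
mvncr.excluded=org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec
# pin an artifact to a fixed version (property name assumed)
mvncr.managed=commons-logging:commons-logging:1.1.2
```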

This configuration would imply that the listed roots and all non-optional compile time references would be added as Java components with mapped references as described above. As an exception however, the artifact org.jboss.spec.javax.transaction:jboss-transaction-api_1.2_spec would be excluded. Furthermore, the artifact commons-logging:commons-logging would exclusively be used in version 1.1.2. Note: Those modifications of the dependency graph resolution correspond to similar Maven configurations (notably as in <exclusions> and <dependencyManagement>).

At times, it is useful to not have all required dependency roots in one component declaration but rather allow some modularization-friendly spread out declaration of dependency roots within a system.

This is achieved by using Fragments of a Maven component repository.

A fragment adds to the dependency graph but does not define where dependencies are retrieved from, so that the requirement for artifacts can be expressed without wiring the actual system to a specific environment. This is how the standard add-ons express their requirements.

Note that having two sets of roots combined and resolved is not equivalent to having two independent Maven Component Repository declarations, as version conflict resolution (to the higher version) always happens within the scope of one Maven component repository. In fact, in most cases having one Maven Component Repository will be the only manageable approach.

In order to add a fragment to a Maven Component Repository declare a component of type com.zfabrik.mvncr.fragment.

When running in Development mode, the repository will also provide the source (classifier) artifact if available, so that the Eclipsoid plugins will provide source code attachments to the development environment whenever possible during classpath resolution.

3.2.6. Component Types and Component Factories

Every component in Z2 has a type, declared via the component property com.zfabrik.component.type. As indicated above, the component type identifies the semantics of a component, i.e. how to treat it and what you can do with it. For example, a web application is of type com.zfabrik.ee.webapp. Being of that type implies an expected folder structure for the resources that belong to the web application. It also implies the ability to be made available via a web server. The semantics of a Java component (of type com.zfabrik.java) are obviously completely different.

Component Factories are in charge of implementing the semantics of a component type. In short, whenever a component is requested via the resource management system, the component factory responsible for the respective component type is asked to create an implementation, more specifically a Resource (see The Resource Management System), that implements the actual component.

So, for example, the component factory for web applications knows how to interpret the folder structure of a web application component as the layout of a Java EE web application and how to register this web application with the Jetty web container. The component factory for Java components knows how to check whether code needs to be compiled, how to compile it, and how to set up class loaders.
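The dispatch from component type to factory can be sketched as follows. The registry and factory interface here are made up for illustration; real z2 factories produce Resource instances via the resource management system:

```java
import java.util.*;
import java.util.function.Function;

// Illustrative component-factory dispatch keyed by the declared component type;
// the registry and factory shape are invented, not the actual z2 API.
public class Factories {
    static final Map<String, Function<String, Object>> byType = new HashMap<>();

    static void register(String type, Function<String, Object> factory) {
        byType.put(type, factory);
    }

    // given a component's declared com.zfabrik.component.type, delegate creation
    static Object create(String componentName, Properties declaration) {
        String type = declaration.getProperty("com.zfabrik.component.type");
        Function<String, Object> f = byType.get(type);
        if (f == null) throw new IllegalStateException("no factory for type " + type);
        return f.apply(componentName);
    }
}
```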

3.2.7. Java Naming and Directory Interface (JNDI) support

Components in the z2-Environment may be looked up via JNDI. The functionality is essentially equivalent to lookups via the IComponentsLookup interface.

When looking up a component, it is typically required to specify the expected return type. When using JNDI URLs this can be accomplished via a type query parameter. For example, when looking up a JDBC data source (see Data Source Components) that is declared in a component repository as the component mymodule/dataSource, the call

new InitialContext().lookup("components:mymodule/dataSource?type=javax.sql.DataSource");

returns a (shared) data source instance.

3.3. Unit of Work and transaction management

The z2-Environment does not mandate any specific way of implementing transaction management.

It does however have a concept of a unit of work that is used by parts of its implementation and that is the underpinning of the simple, but rather useful, built-in Java Transaction API (JTA) implementation.

A unit of work is a well-defined part of the control flow on one thread of execution that resources such as database connections can bind to and learn about whether all work should be committed or rolled back at the end of it. The WorkUnit API that is part of the Z2 core APIs implements this abstraction.

All threads managed by the z2-Environment wrap their work using this API and when extending the z2-Environment with custom threading implementations, it is suggested that you wrap the actual work using the WorkUnit API, so that at least the z2 infrastructure can integrate cleanly and optimize resource usage.
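The unit-of-work idea can be sketched as a thread-local scope that resources enlist with and that decides commit or rollback at its end. This is an illustrative model, not the actual WorkUnit API:

```java
import java.util.*;
import java.util.function.Consumer;

// Illustrative thread-local unit of work; names and shape are invented
// and do not correspond to the actual com.zfabrik WorkUnit API.
public class UnitOfWork implements AutoCloseable {
    private static final ThreadLocal<UnitOfWork> CURRENT = new ThreadLocal<>();
    private final List<Consumer<Boolean>> enlisted = new ArrayList<>();
    private boolean rollbackOnly;

    public static UnitOfWork begin() {
        UnitOfWork u = new UnitOfWork();
        CURRENT.set(u);
        return u;
    }
    public static UnitOfWork current() { return CURRENT.get(); }

    // resources register a callback invoked with true=commit, false=rollback
    public void enlist(Consumer<Boolean> resource) { enlisted.add(resource); }
    public void setRollbackOnly() { rollbackOnly = true; }

    // end of the unit of work: notify all enlisted resources of the outcome
    @Override public void close() {
        try {
            for (Consumer<Boolean> r : enlisted) r.accept(!rollbackOnly);
        } finally {
            CURRENT.remove();
        }
    }
}
```

A database connection pool, for example, could enlist each handed-out connection so that it is committed or rolled back when the unit of work ends, which is the behavior the following paragraphs describe for the built-in pooling.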

The JTA implementation provided in the module com.zfabrik.jta provides a standard UserTransaction implementation that integrates with the WorkUnit API and thereby provides a robust transaction management abstraction that greatly simplifies integration with persistence tools like Hibernate JPA.

It can be looked up using the global JNDI name

components:com.zfabrik.jta/userTransaction

Note that com.zfabrik.jta is not a full-blown transaction manager that supports distributed transactions and corresponding protocols. It is fine for typical non-distributed transaction situations however.

In conjunction with the z2-provided database connection pooling (see ZFabrikPoolingDataSource), it is important to note that, if you choose work unit enlisting, the WorkUnit abstraction defines the transaction boundaries: all database connections are automatically enlisted with the current unit of work and committed or rolled back under control of the WorkUnit implementation.

In terms of the JTA implementation, this behaves as if there is already a transaction open on the thread.

The WorkUnit API supports nesting and suspending of units of work. With the JTA implementation this corresponds to nested and isolated transactions.

Please visit the Wiki page on transaction handling in Z2 to learn more about alternatives and how to integrate with a full-fledged transaction manager.

4. Constructing a Z2 system

This section describes how Z2 systems are assembled from repositories and how to construct your own system from what is provided on z2-Environment.net and your own repositories.

From a Z2 Core perspective it all starts with the Local Repository that is part of the core. In there, we define at least the Base Repository. The Base Repository typically points to some remote Git or Subversion based repository. Once Z2 has registered that repository, other repository declarations may have appeared that will be registered as well, which may lead to the appearance of yet more repositories, and so on. Hence, in effect, we have a chain or tree of repositories with the Local Repository at its root, as far as repositories contain declarations of repositories.

On the other hand, repositories have a priority (more on that below) that determines which repository has the final say on what a module contains.

Based on that mechanism you can construct system definitions that consist of as few as one repository (if we do not count the core) or of many repositories, some of which are even shared between systems.

Before moving forward on that, let's have a look at the add-ons.

4.1. Add-ons

Add-ons add more functionality to Z2. Generally speaking, an add-on is a regular Git or Subversion repository that holds one or more modules and is incorporated into a z2-Environment defined system via a component repository declaration (see Components and Component Repositories).

In other words, technically there is nothing special about add-ons; it is the way they are used that is noteworthy. The idea is that you pick the add-ons you need and add them on top of z2-base. Previously, Z2 was available as distributions; now you take z2-base and add what you need on top.

Add-ons provided on z2-environment.net are versioned just like z2-base, so that there is no complicated version vector to manage. Add-ons also come with documentation in the z2-Environment Wiki and with some samples.

4.2. Building a system on z2-base and add-ons

It is a best practice to encapsulate all system-specific configuration in a module called environment. The repository z2-base.base also contains an environment module. The environment module is one of the preferred places to add a repository definition; another useful place is the Local Repository.

We recommend a setup in which the z2-base repositories hosted on z2-environment.net are incorporated as-is (modulo copying or cloning), combined with zero or more add-ons from z2-environment.net or from you, and one scenario repository that assembles everything.

Such a setup consists of z2-base.core and z2-base.base (or clones or copies thereof), plus zero or more add-ons, and one of possibly many scenario repositories.

The Local Repository in the core has the Base Repository pointing to z2-base.base and, next to the Base Repository, a Scenario Repository (z2-base.core contains a template for that) that points to the scenario repository.

The scenario repository has an environment module that has definitions for all other add-ons used in the scenario.

Finally, the Scenario Repository has a higher priority than the Base Repository, so that it overrides the default environment definitions.

Here is a schematic overview:

This may look complicated, but in reality it is just a matter of filling in the scenario component repository definition. The samples on z2-environment.net work almost exactly the same way; the difference is that they do not use the Scenario Repository but rather the Development Repository – the topic of the next chapter.

The effects of this setup are:

The z2-base.core always stays connected with z2-base.base

This is somewhat crucial. There is no meaningful way of using Z2 without a base repository like z2-base.base.

You can run many scenarios with shared add-ons

By changing only the scenario repository definition you control the complete application system definition implemented on a Z2 Home – while still sharing all add-on and base repositories. The environment module in the scenario repository defines what is part of the scenario.

You can leave repositories from z2-environment.net untouched

As you only re-use, you will typically not need to modify repository content unless you apply bug fixes and the like. That cleanly separates what is yours from what comes from others.

5. Developing with the z2-Environment

So far we have learned about the principles behind the z2-Environment and how to configure and run it. This section is devoted to developing with Z2.

In principle you would not need any tool support: you could simply check out files from your favorite repository, use a text editor or your favorite integrated development environment to add projects and files or modify them as you wish, commit your changes, and synchronize the runtime, which would do whatever else is needed.

While that is good news already, there are some simple tools that make your life even easier and give you a development experience you have probably not seen before in Java environments.

The whole approach to local development using the z2-Environment is currently based on two tools:

The Development Repository – a component repository implementation that allows you to selectively and quickly test modifications

The Eclipsoid Eclipse plugin – a plugin for the popular Eclipse development environment that resolves project dependencies from a running z2-Environment

5.1. A note on what JDK to use

As of version 2.4, Z2 requires Java 8. Previous versions supported Java 6 and Java 7. Since Z2 compiles Java code, it has to decide what language level to compile for. Typically, however, you do not need to worry about this: by default, Z2 simply sticks to the version of the Java Development Kit (or Java Runtime Environment) it is currently executed with.

That is, if you run a Java 8 JDK, then Z2 will compile for Java 8.

You can, however, enforce a language level to compile with using the system property com.zfabrik.java.level. Valid values are "6", "7", or "8" for Java 6, Java 7, or Java 8 respectively.

5.2. Workspace development using the Dev Repository

The Development Repository (or Dev Repo for short) works by checking a file system folder for project subfolders that contain a file called LOCAL and scanning those for components.

The Dev Repo has a high priority within the chain of component repositories. That means that whatever it finds will typically win against definitions provided by other component repositories.

By default, the Dev Repo is configured to look for changes in subfolders (two levels deep) of the folder that contains the core installation. That is the reason behind the folder structure described in the next section.

When using Subversion and checking out the Z2 core into your Eclipse workspace, the Dev Repo will find your projects. When using Git and importing projects from a working tree of a repository that sits next to the Z2 core, the Dev Repo will find your projects as well.

This is how it all ties together: given that the Dev Repo is able to find your project, you simply put a file called LOCAL into the project's root folder, and the project and all its components will be picked up with preference by the Dev Repo the next time you trigger a synchronization.
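For example, assuming a module project folder com.acme.some.project in a location scanned by the Dev Repo (the project name is made up for illustration), arming it is just a matter of creating the marker file:

```shell
# Place the empty LOCAL marker file in the project's root folder;
# the Dev Repo will pick up the project on the next synchronization.
mkdir -p com.acme.some.project
touch com.acme.some.project/LOCAL
```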

That may sound a little complex but, as you will see next, together with the Eclipsoid tool it all rounds out nicely.

Before going there it is noteworthy that the Development Repository has use cases beyond development. Sometimes it is handy to override centrally defined components via the Dev Repo, for example to modify web server ports or data source configurations.

By modifying the system property com.zfabrik.dev.local.workspace you can influence where the Dev Repo scans for components.

5.2.1. Recommended folder structure

Whether you use Git or Subversion makes no difference to the non-development folder layout of a Z2 installation. In development, however, there is a small but noticeable difference.

In a Subversion setup, the folder that holds the Z2 core checkout is also used as the development workspace. That is, you will have a workspace folder, say workspace, and in that workspace the Z2 core checkout as well as other projects, typically corresponding to Z2 modules as far as Z2 is concerned. E.g.:

workspace/z2-base.core
workspace/com.acme.some.project
...

In a Git setup, the development workspace folder is a folder next to the clone of the Z2 core repository. Assuming you installed into install and the workspace folder is called workspace, your structure would look like this (illustrative layout, with a made-up project name):

install/z2-base.core
install/workspace/com.acme.some.project
...

Note: In both cases, the search path for "armed" projects is the same for the Development Repository (see above).

5.3. The Eclipsoid Plugin

The Eclipsoid plugin for the Eclipse development environment comes with the z2-base system and can be installed from the local update site at

http://localhost:8080/eclipsoid/update/site.xml

Alternatively, you can install it from the z2 environment server at

http://www.z2-environment.net/eclipsoid/update/site.xml

This plugin provides a number of useful utilities for working with Z2. The most important functions are:

Trigger synchronization of the running z2-Environment from the IDE (Sync)

Download of dependencies as .jar files from a running z2-Environment (Resolve)

The Eclipsoid plugin fixes an important problem in the development of larger software systems in IDEs like Eclipse: larger software systems consist of many different projects with compilation dependencies between each other. That is, Java code in one project may not compile without access to Java types defined in another project.

IDEs like Eclipse support local compilation of the code you are working on and show compilation problems early on. To do so, however, the project's dependencies need to be resolvable. That is exactly what the Eclipsoid provides: upon Resolve, any Eclipse Java project that is recognized as a Z2 project will be introspected for Java components (see Java Components), and, if found, their references will be resolved on the server side and the required Java definitions will be downloaded and provided to the project.

Technically, a Z2 project is any Java project that has the Z2 Classpath Container (called "ZFabrik.Eclipsoid") in its classpath (i.e. in the ".classpath" file). You do not need to set that up yourself though: either by creating a project as a Z2 project or via "Transform into Z2-project" you can let the plugin do that for you.

That means: in order to work on a single project out of a possibly large solution, you check out that single project, invoke Resolve, and from there on modify and Sync repeatedly to test your modifications. When you are done, you commit your changes and disarm the projects again to make sure the integrated content takes effect.

The last step actually depends somewhat on your setup: if your Z2 is hooked up with remote Git repositories, you may need to push your changes to the remote first.

Sync and Resolve can be invoked by pressing Alt+Y or Alt+R respectively or by clicking on the Z2 toolbar buttons.

Finally, the Eclipsoid can arm and disarm projects: arming a project means putting an empty LOCAL file into it, and disarming means removing that file again.

See the previous section for more details on the Dev Repo. Armed projects are shown with a green halo around the Z decoration in the project view.

5.4. In-container unit testing using Z2 Unit

The z2Unit feature, integrated with z2-base, allows you to run in-container tests in Z2 from anywhere you can run JUnit tests. To learn more about the JUnit testing framework, please visit www.junit.org.

In-container tests are ordinary unit tests that run within the server environment. Standard JUnit tests run in an environment that often has little to do with the tested code's "native" environment, apart from seeing the same types. Everything else (database connectivity, domain test data, component and naming abstractions such as JNDI) needs to be first abstracted out of the tested code and then "mocked", that is, simulated one way or another.

In small systems, and for tests with few environment dependencies, that is quite manageable. In larger scenarios and for higher-level components it becomes unreasonable, and assuring the correctness of the mocked-up environment becomes a testing issue of its own.

The foundation of the z2Unit feature is a JUnit test runner implementation that delegates the actual test execution to a z2 server runtime. That is, although JUnit believes it is running a locally defined test class, the actual test runs on a remote server, executing the methods and reporting results corresponding to the structure of the local test implementation (which indeed matches its server-side equivalent).

Please visit the How_to_z2Unit Wiki page to learn how to use z2Unit in practice.

If you want to automate out-of-container tests and cannot rely on the Eclipsoid to provide a suitable class path, you can use the com.zfabrik.dev.util/jarRetriever tool to retrieve all required dependencies, as described next.

5.5. Retrieving jars from Z2

In most everyday operations you do not need to think about binary build results when using the Z2 environment. Sometimes, however, in particular when running or inspecting code outside of Z2, it is required to have compiled binaries at hand.

Using the com.zfabrik.dev.util/jarRetriever tool you can request the binaries of a set of Java components including their dependencies. This tool is an example of a main program running in an embedded Z2 environment. That is, in order to run it you do not need a running Z2 server. You do, however, need a Z2 home installation: the installation folder of the Z2 home is used to load the jar files from.

6. Enhanced Basis Features

6.1. The Binary Hub Component Repository

In some cases it is not desirable to have the Z2 runtime access source code repositories directly, for example so that no source code is ever stored on production machines. Another reason may be to take the load of compilation off production nodes.

The Hub Component repository addresses this problem by providing the following pieces:

A providing side that serves all modules and components available to the system in pre-compiled form (as far as compilable code is involved)

A client side that connects to the providing side

So instead of connecting to the original source of components, the Hub Component Repository enables an operational approach in which some Z2 runtimes see all system content in pre-compiled form only.

6.2. The Gateway for Zero-Downtime-Upgrades

The Gateway module implements a "zero-downtime-upgrade" feature in Z2. Specifically, it uses the worker process management of Z2 in conjunction with an intermediate reverse proxy style Web handler to implement the following feature:

Upgrading a stateful Web application, i.e. a Web application that stores user data in its HTTP session, typically implies downtime. If the session state is not serializable and persisted across the upgrade, it additionally implies that user state gets lost and, typically, that users need to log on again.

Using the Gateway, running sessions can be preserved during a node upgrade: worker resources remain assigned to the current software revision for as long as there are running sessions, until all such sessions have terminated. The typical application of this feature is to roll out functional and user-interface corrections without interrupting users. Users switch over to the post-upgrade software by terminating their session (e.g. via a log out) and starting a new one (e.g. by logging in again).

The approach behind the Gateway feature is simple:

Allow separation of user sessions across worker processes

Provide an entry point to Web applications that is capable of identifying which worker process serves an associated session and of routing requests to that worker process.

Enhance worker process management with the capability of identifying stale worker processes that will not serve any user request in the future.

7.1. Core component properties

Components in a Z2 component repository are declared using a set of properties, name-value pairs, that state the essential characteristics (beyond the name) of a component.

Typically, the component type (also a property, see below) defines the set of properties that it makes sense to declare. Some components, however, look for declarations in other components; as an example, see the System State component type below.

Very few properties are built into the z2 core and apply to any component:

name

values

com.zfabrik.component.type

The type of the component. The value of this property determines the Component Factory that implements the semantics of the component.

com.zfabrik.component.dependencies

A comma-separated list of component names. Components listed should implement IDependencyComponent; that interface will be invoked before the declaring component is provided, and the declaring component will depend on all listed components.

Component dependencies allow you to make sure that other components are "prepared" before a particular component is used. This can be handy when some functionality of your solution depends on a side effect established by another component. For example, a web application may depend on a successful database migration check or on another web application.

In order to be prepared, a component implementation needs to implement IDependencyComponent, as for example Web apps do.

Note that other component types, such as System States may define properties that apply to yet other components.
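As a sketch, a component declaration using these built-in properties might look like the following (component and module names are made up for illustration; see Components and Component Repositories for where such declarations live):

```properties
# Hypothetical component that is only provided after a database
# migration check component has been prepared
com.zfabrik.component.type=com.zfabrik.any
com.zfabrik.component.dependencies=com.acme.db/migrationCheck
```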

7.2. “Any” components (core)

"Any" components may represent, as the name tries to indicate, any sort of interface or aspect. In short, implementations of "any" components simply extend the Resource Management base class Resource (see the Javadoc).

Typically, "any" components are only useful if you need to satisfy some generic interface like IDependencyComponent but no more narrowly defined semantics are provided in the form of a component factory.

That said, unless you have a problem that demands an "any" component, you do not need to worry about them.

Properties of an “Any” Component:

name

values

com.zfabrik.component.type

com.zfabrik.any

component.className

Name of the class that implements com.zfabrik.resources.provider.Resource
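A sketch of an "any" component declaration (the implementation class name is hypothetical):

```properties
# "Any" component backed by a custom Resource extension
com.zfabrik.component.type=com.zfabrik.any
component.className=com.acme.tools.ToolResource
```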

7.3. Component Factories (core)

In general, a component factory implementation is an implementation of the interface com.zfabrik.components.provider.IComponentFactory. When called, it is asked to return an extension of com.zfabrik.resources.provider.Resource that represents all runtime aspects of the component of the passed-in name.

As a shortcut, the class name given by the property component.className in the component's descriptor may name a class that extends com.zfabrik.resources.provider.Resource rather than implementing the factory interface above.

In that case, the extension class must have a constructor that takes a single String parameter; it will be instantiated for a given component by its name when required (i.e. when otherwise the factory interface would have been called).

Only one component factory per type name may be declared.

Properties of a Component Factory Component:

name

values

com.zfabrik.component.type

com.zfabrik.componentFactory

component.className

Name of a class that implements com.zfabrik.components.provider.IComponentFactory, or name of a class that extends com.zfabrik.resources.provider.Resource. See also above.

componentFactory.type

Name of the component type implemented by this factory. Components that declare this type are managed by resources provided by this factory.
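A sketch of a component factory declaration (the type and class names are hypothetical):

```properties
# Factory providing the semantics for all components of type com.acme.widget
com.zfabrik.component.type=com.zfabrik.componentFactory
componentFactory.type=com.acme.widget
component.className=com.acme.widget.WidgetFactory
```

Components declaring com.zfabrik.component.type=com.acme.widget would then be managed by resources provided by this factory.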

7.4. Data Source components (z2-base)

Data source components allow you to manage JDBC data sources as z2 components. When present, the built-in support for JNDI lookups (see Java Naming and Directory Interface (JNDI) support) of the z2-Environment can be used to make these data sources accessible to typical Java frameworks, such as Java persistence providers, or they may be used directly.

The benefit of specifying JDBC data sources as z2 components lies in the simple maintenance of their configuration. You are in no way limited to this component type when you need a data source; at times it may be more suitable to leave your data source configuration, for example, in a Spring application context and expose it as a bean to make it re-usable across modules.

Data source configuration is split into two parts: General configuration and Data Source implementation specific configuration.

General Properties of a Datasource Component:

name

values

com.zfabrik.component.type

javax.sql.DataSource

ds.type

The type of data source used. Supported values are NativeDataSource or ZFabrikPoolingDataSource. See below.

ds.enlist

The data source may be enlisted with the WorkUnit. The WorkUnit API provides a simple way to attach shared resources to the current thread of execution for the duration of a unit of work (typically a web request or some batch job execution) as implied by thread usage (see ApplicationThreadPool).

Supported values are none and workUnit. Default value is workUnit.

ds.dataSourceClass

If set, the specified data source class will be loaded as the data source implementation using the private class loader of the Java module of the component holding the data source definition. When specifying this class in conjunction with ZFabrikPoolingDataSource as type, configuration properties will be applied to both, and the pool will request new connections from the specified data source. Alternatively, the pool may be configured directly with a JDBC driver class and connection URL (see ZFabrikPoolingDataSource below).

7.4.1. Data Source Specific Configuration

Both when specifying a data source class and when using the built-in pooling data source, properties of the data source implementation class can be specified as Java Beans properties using the syntax below:

ds.propType.<prop name>

Type of the property. Can be int, string, or boolean. Default value is string.

ds.prop.<prop name>

Value of the data source property to be set according to its type setting above.

7.4.2. Data Source Types

Currently the data source support allows specifying two different types of data sources:

NativeDataSource

When declaring a native data source, the ds.dataSourceClass must be specified to name a data source implementation class.

All further configuration of the data source is done generically using the property scheme described in the previous section.

ZFabrikPoolingDataSource

When declaring a ZFabrikPoolingDataSource, a z2-provided database connection pool implementation will be used that has the following configuration properties:

Name

Type

Value

driverClass

string

Name of the actual JDBC Driver implementation class. E.g. com.mysql.jdbc.Driver for MySQL.

url

string

JDBC connection url

user

string

User name for authentication at the data base.

password

string

Password for authentication at the data base.

maxInUseConnections

int

Maximum number of connections handed out by this pool. This number may be used to limit database concurrency for applications. Requesting threads are forced to wait for freed connections once this limit has been exhausted. Make sure threads do not hold locks on shared resources when requesting connections if this limit is less than your theoretical application concurrency, as this may lead to thread starvation.

maxSpareConnections

int

Number of connections held open even when not currently used by applications.

connectionExpiration

int

Connections will be closed once this number of milliseconds has passed since their creation and they are returned to the pool. This setting can be used to make sure stale connections get evicted even if not detected otherwise by the pool.

connectionMaxUse

int

Connections will be closed after they have been handed out from the pool this number of times and are returned to the pool. This setting can be used to make sure connections only serve a limited number of requests.
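Putting the pieces of this section together, a pooled data source declaration might look like this sketch. The connection details are made up, and setting the pool properties via the generic ds.prop.* scheme of the previous section is an assumption here:

```properties
# Hypothetical pooled MySQL data source, enlisted with the work unit
com.zfabrik.component.type=javax.sql.DataSource
ds.type=ZFabrikPoolingDataSource
ds.enlist=workUnit
ds.prop.driverClass=com.mysql.jdbc.Driver
ds.prop.url=jdbc:mysql://localhost:3306/acme
ds.prop.user=acme
ds.prop.password=secret
ds.propType.maxInUseConnections=int
ds.prop.maxInUseConnections=20
```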

7.5. File system component repositories (core)

File system based repositories are the most straightforward repositories. All that is required is a file system folder that holds components and component resources in the structure described in Components and Component Repositories. As always for component repositories, it is important to make sure they are started early in the life-cycle of a z2 runtime.

Note that unlike the development repository (see Developing with the z2-Environment), the file system repository is not robust under modification: resources in the folder structure of the file system repository will be accessed whenever the z2 runtime requires them – which may be significantly later than the latest synchronization that decided about invalidations due to changes. To assure consistency, resources should not be modified in the meantime.

Properties of a File System Component Repository Component:

name

values

com.zfabrik.component.type

com.zfabrik.fscr

fscr.folder

Store folder, i.e. the file system folder that holds the actual resources the repository runs over.

fscr.checkDepth

Component folder traversal depth when determining the latest time stamp. Set to a value less than zero for infinite depth. Default is -1.
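A sketch of a file system repository declaration (the folder path is made up):

```properties
# File system component repository over /opt/acme/repo
com.zfabrik.component.type=com.zfabrik.fscr
fscr.folder=/opt/acme/repo
fscr.checkDepth=-1
```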

7.6. GIT component repositories (core)

When using a Git based component repository, the z2 runtime manages a local clone of another Git repository. This allows declaring Git component repositories that refer to a local Git repository, typically in a development setup, or to remote Git repositories, as used in production setups.

In both cases, when synchronizing, the component repository will pull updates from the configured repository and check for modifications by inspecting the local workspace, i.e. the Git workspace maintained by the z2 environment runtime itself.

Note: When specifying a local repository in gitcr.uri the relevant branch is still the one configured in the component properties (see below), not the checked out branch of the local repository.
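As a sketch, a Git component repository declaration might look like the following. The property gitcr.uri is mentioned above; the component type name and the branch property name are assumptions for illustration:

```properties
# Hypothetical Git component repository declaration
com.zfabrik.component.type=com.zfabrik.gitcr
gitcr.uri=https://git.acme.com/acme-system.git
gitcr.branch=master
```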

You can use home layouts to define a static OS process layout for all z2 home runtimes of your system, and you can use them to implement heterogeneous cluster layouts, that is, setups where many z2 home installations share one system definition but run different sets of worker process configurations.

At any point in time, a home process maintains only one home layout. To specify the home layout to use, set the system property com.zfabrik.home.layout to the name of the particular home layout component – typically in a mode line of the launch.properties file (as described in Folder Structure of a Z2 Home).

When the Home Layout component is loaded, it will try to load the worker processes specified and depend on them subsequently.

Triggering (re-) compilation of source code in a Java component using the compiler API

The mechanisms around references and includes between Java components, and the separation of Java components into public (api), private (impl), and test parts, are the underpinnings of the software modularization features of z2, which is why we discuss them in some depth here.

7.8.1. Classloaders

The class loader concept of the Java platform provides a powerful name-spacing mechanism for the type system. While initially this seems of little concern, in more complex scenarios isolation within the type system, in conjunction with sharing of types between modules of a solution, becomes the catalyst of successful modularization.

Isolation means that modules on the platform may use types without sharing them, that is without making them visible to other modules. That can be important for various reasons:

Implementation types should be hidden from potential users so that modifications do not break using modules (encapsulation).

In particular, third-party libraries used in the implementation of a module may conflict with other versions of similar libraries, so that exposing them would lead to unnecessary risks on the consuming side (multiple versions).

Sharing of types, on the other hand, allows different modules to refer to the very same types and, as they are shared, provides an efficient, type-safe way of communicating state between modules:

By publishing an API, modules may expose services efficiently to other modules

Based on these mechanisms, modularization for Java components on Z2 provides the ability to maintain a system of named modules that have defined contracts among each other while still maintaining local integrity and cohesion.

The class loading system in Z2 is based on an ancestry-first, multi-ancestor scheme. Effectively, a Java component has two class loaders at runtime: one for the API and one for the implementation, and both ask their ancestors (other class loaders) before searching local resources.

The API class loader will have ancestors corresponding to all Java components identified by the public references. The implementation class loader will have the API class loader as ancestor and ancestors corresponding to all Java components identified by the private references of the Java component (see below).

7.8.2. Includes

Another important mechanism supported by the Z2 Environment is so-called "Java includes". The references feature described above allows sharing of types and class path resources without duplicating them at runtime.

There are cases however where duplication of types is necessary – although that is fortunately the exception:

Frameworks like the Spring Framework ship pre-compiled libraries that contain "adapters" for various other frameworks that may not be present in the using application. The late-linking qualities of the Java VM support unresolvable type references as long as they are not needed. In this case, the library must be used in the class loading name space of the using application to make sure it gets appropriate type visibility.

Some libraries attach information about the using application to the class loading namespace itself, e.g. via class variables. In that case, sharing types can easily lead to unpredictable behavior as state from different class loading name spaces may override each other.

The use of includes, in most cases only "private includes", implements exactly that: the Java resources of the included component get copied into the using Java component and hence are used as if provided by the using Java component itself.

The picture below shows a simplified example overview over the reference and the include mechanisms:

Properties of a Java Component:

name

values

com.zfabrik.component.type

com.zfabrik.java

java.publicReferences

Points to another java component whose public types will be shared with this one (and maybe others). Everything referenced as public reference will be visible to the public interface of the referencing component as well as to all referencing the referencing component. In other words: References are transitive. In particular, anything required to compile the public types of a Java component must be referenced via this reference property. Components may be specified as a comma-separated list. Component names that have no "/" will be defaulted by appending "/java".

java.publicIncludes

Points to com.zfabrik.files or com.zfabrik.java components that must have a bin (or alternatively a bin.api, for Java components) folder that will be included into this java component's public java resources. The component may also have a src (or alternatively src.api, for Java components) folder that will be copied before compilation into src.api.

java.privateReferences

Points to another java component whose public types will be shared with this one (and maybe others). Nothing referenced as a private reference will be automatically exposed to the public interface of the referencing component nor to other components. Anything needed to compile the private types of a Java component must be referenced as a public reference, be part of the public types of that component, or be referenced via this reference property. In other words: the private types automatically see the public types and, transitively, anything referenced publicly as described above. In addition, to use more types in the "private implementation section" of a Java component, types that will not be exposed to referencing components, use this reference property. Components may be specified as a comma-separated list. Component names that have no "/" will be defaulted by appending "/java".

java.privateIncludes

Points to com.zfabrik.files or com.zfabrik.java components that must have a bin (or alternatively a bin.api, for Java components) folder that will be included into this java component's private java resources. The component may also have a src (or alternatively src.api, for Java components) folder that will be copied before compilation into src.impl.

java.testReferences

Points to another java component whose public types will be shared with this one (and maybe others) if the execution mode, as defined by the system property (see Foundation.MODE), is set to development. Test references extend the private references. In conjunction with the tests source folder this allows adding test code and corresponding dependencies that will be ignored by the runtime unless it is running in development mode.

java.testIncludes

Points to com.zfabrik.files or com.zfabrik.java components that must have a bin (or, for Java components, alternatively a bin.api) folder that will be included in this Java component's test Java resources. The component may also have a src (or, for Java components, alternatively src.api) folder that will be copied into src.test before compilation.

java.compile.order

The compile order must be defined for Java components that also contain non-Java sources, e.g. Scala. This property can be omitted for pure Java components; otherwise all compilers have to be listed in the right order, e.g.: scala, java
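Taken together, a descriptor of a mixed-source Java component might look like the following sketch; all component names and values are purely illustrative assumptions, not taken from an actual system:

```properties
# Hypothetical z.properties of a Java component "mymodule/java"
com.zfabrik.component.type=com.zfabrik.java
# public types: visible to this component and to anything referencing it
java.publicReferences=other.module
# private types: visible to the implementation (src.impl) only
java.privateReferences=some.util/java
# test dependencies: honored only when running in development mode
java.testReferences=test.libs
# mixed Scala/Java sources: run the Scala compiler before javac
java.compile.order=scala,java
```

Note that component names without a "/" would be expanded by appending "/java" as described above.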

7.9. JUL configurations (z2-base)

The standard logging implementation contained in the package java.util.logging (or JUL for short) of the Java SE distribution can be configured using components of type java.util.logging.

The z2 Environment implementation uses JUL throughout (rather than log4j or other logging mechanisms). Defining java.util.logging components provides an easy way to distribute log configurations without the need to modify command lines and without the need to restart the runtime.

Components of type java.util.logging are expected to provide a file called logging.properties in their resources (see, for example, the component environment/logging in z2_base/base). That file will be applied using LogManager.getLogManager().readConfiguration(...) every time the component is prepared (as in IDependencyComponent, i.e. as part of a dependency resolution), e.g. when (re-)attaining a participated system state.
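Such a component's logging.properties is an ordinary JUL configuration file. A minimal sketch (logger names and levels are made up):

```properties
# Hypothetical logging.properties of a java.util.logging component
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
# default level and a more verbose application logger
.level=INFO
com.example.app.level=FINE
```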

Properties of a JUL Configuration Component:

com.zfabrik.component.type

java.util.logging

7.10. Log4J configurations (z2-base)

Components of type org.apache.log4j.configuration are handled exactly as components of type java.util.logging (see right above), except that a file called log4j.properties is expected and loaded using Log4J's PropertyConfigurator API (see the Log4J documentation for the specifics of Log4J configuration).
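The expected log4j.properties is a standard Log4J 1.x configuration file, for example (logger and appender names are made up):

```properties
# Hypothetical log4j.properties of an org.apache.log4j.configuration component
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n
```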

Properties of a Log4J Configuration Component:

com.zfabrik.component.type

org.apache.log4j.configuration

7.11. Maven component repositories (z2-base)

Maven component repositories allow integrating artifacts from Maven artifact repositories without copying them into your system. See Maven Repository Support for more details.

Properties of a Maven Component Repository Component:

com.zfabrik.component.type

com.zfabrik.mvncr

mvncr.settings

Specifies the location of the settings XML file relative to the component's resources. This is expected to be a standard Maven configuration file. Defaults to settings.xml.

mvncr.roots

A comma-separated list of root artifacts.

mvncr.managed

Artifact versions to be fixed, if the artifacts are encountered during recursive root resolution. This corresponds to a <dependencyManagement> section in a Maven POM file.

mvncr.excluded

A comma-separated list of artifacts that will be skipped during resolution of any root.

If set to true, the version part will not be removed from the Java component name mapping and a versioned name is used instead. That is, in the case above, a Java component org.springframework.security:spring-security-aspects:3.2.2.RELEASE/java would be mapped. This is useful if "non-default" versions are required.

scope

Any of RUNTIME, COMPILE, PROVIDED, SYSTEM, TEST, corresponding to the Maven dependency scopes. If set, non-optional dependencies of the respective scope will be traversed to resolve dependencies.
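A Maven component repository descriptor might hence look like this sketch; the artifact coordinates are illustrative assumptions:

```properties
# Hypothetical z.properties of a Maven component repository
com.zfabrik.component.type=com.zfabrik.mvncr
# Maven settings file within the component's resources (the default)
mvncr.settings=settings.xml
# root artifacts to resolve recursively
mvncr.roots=com.example:some-library:1.2.3
# pin a version wherever this artifact is encountered
mvncr.managed=com.example:some-dependency:2.0.0
# never resolve these
mvncr.excluded=commons-logging:commons-logging
```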

By default, a Subversion component repository needs to be able to connect to the Subversion repository. In most cases this means that you need to be online when running a z2 environment, even during development.

In reality, however, the repository has all required resources in its local caches and only checks for updates. In development situations it can be very handy to have the repository simply go by what is in the caches and to continue developing in an Eclipse workspace, ignoring possible central modifications.

To enable that you can set the system property

-Dcom.zfabrik.svncr.mode=relaxed

in launch.properties. In that case, failure to connect to a remote repository will be noted with a warning but otherwise ignored, and the repository will try to satisfy component lookups from cached data.

To avoid problematic licenses, the z2 Environment unfortunately does not come with complete built-in Subversion connectivity. Some additional, one-time configuration steps are required to complete Subversion enablement on your side, as described in the Subversion How-To.

7.14. System States (core)

System states are abstract target configurations for z2 processes. Systems can easily develop into a non-trivial set of web applications, batch jobs, web service interfaces and more that interplay with each other to implement solution scenarios. Take for example an e-commerce web site: There is the actual shop front-end but also report generation, mass-emailing, shop content administration, etc.

It is handy to group components that form parts of an overall scenario and that need to be initialized up front, such as web applications or update scheduling for analytical data aggregation.

System states support dependency declarations in two directions:

Via the property com.zfabrik.systemStates.dependencies, a comma-separated list of components may be specified that this state depends on and that must be prepared for the state to be attained.

Via the property com.zfabrik.systemStates.participation, a comma-separated list of system state components may be specified that are to depend on this component and that cannot be attained unless this component has been prepared (which equals attained in the case of system states). This property may be declared on components of any type.

State dependencies are evaluated eagerly. That is, even if some dependency fails to prepare, so that eventually the system state will not be attained, the preparation of all other dependencies will still be attempted.

System state dependencies are similar to component dependencies (see Core component properties). There are subtle differences, however, in that component dependencies are evaluated much earlier in the life cycle of a component than state dependencies. In essence this means that the system state implementation may never evaluate participations if its component dependencies fail. Hence, on system states, it is preferable not to use component dependencies but rather the declarations above.

In practice, system states serve as a convenient, named placeholder for parts of an overall scenario.

Attaining a system state means to “prepare” (see IDependencyComponent) all dependency components, and to do so again whenever one got invalidated and the state is to be attained again.

The system state feature is used by z2 in several places:

All z2 processes (in server mode) have the target state com.zfabrik.boot.main/process_up

The home process has a hard-coded target state com.zfabrik.boot.main/home_up

Worker processes always have the target state com.zfabrik.boot.main/worker_up

Component repositories should participate in com.zfabrik.boot.main/sysrepo_up

Worker processes express their target configuration by a list of system states to attain (and keep attained). See below for worker process component configuration.

As mentioned above, in order to assign components to a system state, you can either list them as dependency components in the system state definition like this:
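A sketch of both declaration styles; the module and component names are purely hypothetical, and com.zfabrik.systemState is assumed to be the system state component type:

```properties
# Hypothetical myapp/main_up system state declaring its parts as dependencies
com.zfabrik.component.type=com.zfabrik.systemState
com.zfabrik.systemStates.dependencies=myapp/web, myapp/jobs

# Alternatively, a component may declare its participation itself,
# e.g. in myapp/web's component descriptor:
# com.zfabrik.systemStates.participation=myapp/main_up
```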

7.15. Web applications (z2-base)

In addition to the standard Java Web application content, a Web application in z2 comes with a z2 component descriptor that covers the remaining parts (such as the Web app's context path) and the life cycle control inherent to z2.

Web applications have the following module structure in z2:

WebContent

Folder holding the standard Java Web application structure, such as the WEB-INF folder.

z.properties

Component descriptor of the Web application

Properties of a Web Application Component:

com.zfabrik.component.type

com.zfabrik.ee.webapp

webapp.path

Context path of the web application.

webapp.server

Component name of the web server to host this Web application.

webapp.requiredPaths

A comma-separated list of context paths this Web application relies on. This is an alternative way of defining a component dependency by Web app context path rather than by component name.
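A complete Web application descriptor might hence look like this sketch; the context path and module name are assumptions, and environment/webServer is the web server component of z2-base mentioned below:

```properties
# Hypothetical z.properties of a Web application component
com.zfabrik.component.type=com.zfabrik.ee.webapp
webapp.path=/shop
webapp.server=environment/webServer
```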

7.16. Web servers (z2-base)

In order to run Web applications (arguably the most prominent reason to run an application server), z2 integrates the Jetty Web Container.

There is no particular reason for Jetty other than that it is a well-embeddable, well-performing, standards-compliant web container. Given z2's extensible component model and the way Jetty has been integrated, Tomcat could be integrated as well. Contact us if that is important for you.

The component type com.zfabrik.ee.webcontainer.jetty configures instances of Jetty web servers. While in most cases you will not operate more than one, the component still provides the place to hold Jetty configuration. As an example have a look at the environment/webServer component in z2_base/base.

jetty.override-web.xml

Names an override web.xml file that can be used to override web application configuration for all web applications. The file name is taken relative to the component's resource folder.

jetty.default-web.xml

Names a default web.xml file that defines web application defaults for all web applications. The file name is taken relative to the component's resource folder.

7.17. Worker Processes (z2-base)

Worker processes are managed by the home process when running in server mode. Worker processes improve the robustness of the z2 runtime, as applications running in one worker process do not impact applications running in another worker process, nor, in particular, do crashing worker processes impact the home process.

Properties of a Worker Processes Component:

worker.process.vmOptions

General virtual machine parameters for the worker process. See the JVM documentation for details.

worker.process.vmOptions.<os name>

Override of the general VM options above for a specific operating system. Replace <os name> with the operating system name as returned by the OperatingSystemMXBean (equivalently, the os.name system property).

worker.states

Comma-separated list of target state (components) of the worker process (see also ISystemState). The worker process, when starting, will try to attain these target states and will do so again during each verification and synchronization.

worker.concurrency

Size of the application thread pool (see ApplicationThreadPool). In general this thread pool is used for application-type work (e.g. for web requests or parallel execution within the application). This property helps achieve a simple but effective concurrent load control.

worker.process.timeouts.start

Timeout in milliseconds. This timeout determines when the worker process implementation will forcibly kill the worker process if it has not reported startup completion by then.

worker.process.timeouts.termination

Timeout in milliseconds. This timeout determines when the worker process implementation will forcibly kill the worker process if it has not terminated within this period after being asked to terminate.

worker.process.timeouts.communication

Timeout in milliseconds. This is the default timeout after which the worker process implementation will forcibly kill a worker process if a message request has not returned.

worker.debug

Debugging for the worker process will be configured if this property is set to true and the home process has debugging enabled. Otherwise the worker process will not be configured for debugging.

worker.debug.port

The debug port to use for this worker process, if it is configured for debugging.
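Putting some of these properties together, a worker process descriptor might look like the following sketch; the component type com.zfabrik.worker and all values are assumptions for illustration:

```properties
# Hypothetical z.properties of a worker process component
com.zfabrik.component.type=com.zfabrik.worker
# general VM options, overridden on Linux
worker.process.vmOptions=-Xmx512M
worker.process.vmOptions.Linux=-Xmx1024M
# target system states to attain and keep attained
worker.states=com.zfabrik.boot.main/worker_up, myapp/main_up
# application thread pool size
worker.concurrency=20
# give the worker three minutes to report startup completion
worker.process.timeouts.start=180000
```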