Category Archives: MDSD

As I am addicted to code generation and DSLs, the CodeGeneration conference in Cambridge is a must for me every year. Last year I could not make it, since I had the chance to speak at EclipseCon North America, which took place in the same week. This year Mark took EclipseCon into consideration (it was last week), so my colleagues from itemis and I will be there again. In fact, this year there will be more itemis people than ever. Mark already assumed in his opening words at CG2011 that almost everyone from itemis was there; this year we prove that itemis is larger. I think our company is so closely related to the conference theme that it is natural that we have lots to present, and much interest in hearing what others are doing and have learned.

Before the actual CodeGeneration conference starts on Wednesday, there are some pre-conference activities. My colleagues Holger Schill and Moritz Eysholdt will hold an intensive two-day Xtext workshop on Monday and Tuesday.

I will arrive Monday at noon, since I am taking the early flight directly from my hometown Dortmund to London Luton. From there, I have to take a two-hour bus ride to Cambridge. In the evening I plan to meet Holger, Moritz, Meinte Boersma and hopefully some others in the Castle Inn pub. When you arrive on Monday, drop into the Castle Inn around 8 PM (I guess we will go to a restaurant before). You can reach me there on my mobile phone:

On Tuesday this year’s Language Workbench Challenge summit takes place. We have 14 submissions (wow!) for the LWC13 assignment. I have been working on the Xtext submission together with two colleagues, Johannes Dicks and Thomas Kutz. The results are available as the open-source project lwc13-xtext at Eclipselabs. We have prepared a detailed step-by-step tutorial as the submission paper. The resulting document LWC13-XtextSubmission.pdf is available for download. On the project homepage I have placed a quick-start tutorial today. Oh boy, this project cost quite some time. The actual solution is not much code, but as so often, it is harder to write less code than more. It could be even less, but we took care that the code stays readable and understandable. And writing the document was at least as much work as the implementation.

Every presenter has only 15 minutes to present their approach. 15 minutes of presentation for that much work. I guess the other participants also invested quite some time. Both of my co-authors got the chance to visit the conference, and Thomas will support me with the presentation. He will demo the resulting JSF application and the DSL source code while I do the main talking. We did a test run of the talk yesterday evening and easily exceeded 20 minutes. I think Angelo will bring his egg timer again, which shows no mercy regarding speakers’ talk time. But only that way will we be able to run 14 talks in one day. We will have to restrict ourselves to the most important aspects.

Besides my colleagues Thomas and Johannes, Sven Lange will also join us. I have had the pleasure of working with him in my long-time project at Deutsche Börse (German Stock Exchange) for over half a year now. Sven is a highly motivated, skilled and smart person. It is still the same project I reported about at CodeGeneration 2009 together with my former colleague Heiko Behrens. Sven works full-time on this project, while I am scheduled for 20%. We have migrated a huge code generator project there from Xpand to Xtend. This alone would be worth an experience report session. Sven is working on Xtend support for IntelliJ, which he might present in a lightning talk on Wednesday.

Currently we are finalizing our presentation slides. Boris has been ill for some days and busy with a new release of their ASES product, a workforce management suite. This is a really interesting customer and project. They have been evolving this product for 25 years now, and they have been using code generation for ages. I think one can say that it helped them survive in their business; some competitors did not manage to make larger platform shifts and died. Most of them tried a big-bang replacement, but the business evolves so fast that the target is constantly moving. Boris and I will speak about this product and how it has evolved over the years. ATOSS was one of the first major projects using openArchitectureWare 4 (which mainly means Xpand), and they are currently preparing a shift to Xtend.

I am glad that this talk is already on Wednesday; I never come to rest until I have finished a talk. Afterwards I can just relax and enjoy the conference. I am expecting some interesting insights into different approaches. Experience reports are especially interesting for me. I have not yet decided which sessions I will attend. At the moment I plan to see John Hutchinson with “The Use of Model-Driven Development in Industry” in the morning, and Darius Silingas with “Why MDA Fails: Analysis of Unsuccessful Cases” in the afternoon.

In the evening it is again time for the punting boat tour. I have already attended three times, but it will surely be great fun again. Let’s hope the weather is not too bad; I saw a prediction of ~10°C and a chance of light showers. In the past we were lucky, and on a warm, sunny day the tour is twice the fun. However, I’d better pack an umbrella into the suitcase.

On Thursday I again have an active part in the hands-on session “Have Your Language Built While You Wait”, which is hosted by Risto Pohjonen from MetaCase. The idea of this session is that attendees can get a DSL built with the language workbench of their choice, with the help of experts for that workbench. Of course I will assist on Xtext. If you had no chance to visit the Xtext workshop, this might be your chance to get some hands-on experience with Xtext. This session was already run successfully last year. Back then my colleague Benjamin Schwertfeger took over the Xtext part, since we were at EclipseCon.

There are also some other talks around Xtext and Xtend. Both were released in version 2.4 on March 20th, which brings some interesting new features. Most notable with regard to code generation are the Active Annotations. I guess this is also part of what Sven Efftinge will address as the future of code generation in his keynote “The Past, Present and Future of Code Generation” on Wednesday morning. He will present more details together with Sebastian Zarnekow in the tutorial “Internal DSLs with Xtend” (Thursday 10:45-12:00). The last Xtext-related talk will be from Moritz Eysholdt, called “Executable Specifications for Xtext Languages” (Friday 10:45-12:15). I am actually not sure which of these talks I will attend. They are most relevant for my work, and I don’t work closely enough with Xtext to catch everything new on my own, so I would definitely learn important aspects. On the other hand, there are also other interesting talks in parallel.

The coming week will be an intensive experience with lots to learn and interesting people to meet. Although I will really enjoy this time, I will be glad when I finally come back home. At the moment my family is ill, and I hope I don’t get infected these days. I have been looking forward to this event and worked hard for it, so I am keeping my fingers crossed that I can board healthy on Monday morning.

I am sure the organizing team around Mark and Jacqui will again do a great job.

The Eclipse Modeling Project provides the world’s leading set of tools and frameworks for successfully applying model-driven software development techniques in various areas. Successful adoptions are known in enterprise computing, embedded system development, mobile development etc. But what about game development? So far I have not heard about game productions that use Eclipse Modeling, or model-driven software development in general. I cannot know about all projects in the world, but it is at least an indicator that this development technique is not widely adopted in the game development industry.

Game development is highly complex, done in multidisciplinary teams under high time pressure and quality requirements. And the complexity keeps growing, and so does the time pressure. Time-to-market is everything there. If your game comes too late, you are out. If you don’t use the latest technologies, you are lame. How could such projects ever be successful just by coding and hacking? I could imagine that game developers are simply too busy developing their games in the traditional way to think about how they could gain speed and quality by applying software engineering techniques like MDSD.

I would not be surprised if they associate MDSD with drawing UML diagrams and wasting time clicking and drawing useless stuff. Model-driven software development is anything but useless. It helps raise the level of abstraction, speed up development and gain quality. If applied correctly, of course. Of course they think their kind of software development is special and completely different from other disciplines. But let me say, that is not the case. Every piece of software has generic parts, schematic parts, and parts that fit into neither of the previous categories. And for the schematic parts, MDSD can always help. Don’t tell me that a multi-million, mission-critical enterprise project is less challenging than game development.

One of the most promising things for game development can be the use of domain-specific languages (DSLs), especially textual ones. With Xtext 2.0 the development of textual DSLs with tight integration of expression languages and code generators has become easier than ever before. If you have never tried Xtext, do it!

The structure of typical Xtext projects does not match the standard layout of Maven projects. Xtext projects adhere more to the standard Eclipse project layout, which means:

manually implemented Java sources are in folder /src

the plug-in manifest is /META-INF/MANIFEST.MF

generated sources go to /src-gen

In my customer’s project the Xtext sources are built with Maven, and the sources produced by Xtext are also generated within the Maven build, using the Fornax Workflow Maven plugin. Until now we had adjusted the Maven build to match the standard Xtext project structure, which requires some configuration in the build section of the POM, as follows:
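The original listing did not survive here. As a minimal sketch, registering the Eclipse-style source folders with Maven can be done with the standard build-helper-maven-plugin (the execution id and folder names are illustrative, not necessarily what the project actually used):

```xml
<build>
  <plugins>
    <!-- register the Eclipse-style source folders with the Maven build -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>build-helper-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>add-xtext-sources</id>
          <phase>generate-sources</phase>
          <goals>
            <goal>add-source</goal>
          </goals>
          <configuration>
            <sources>
              <source>src</source>
              <source>src-gen</source>
            </sources>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```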

Another requirement from the build team is that the actual output directory ‘target’ should be configurable. This mainly means that we have to use the properties Maven uses to refer to the source and target directories (project.build.sourceDirectory and project.build.directory), so that the build can override just these settings by passing a system property, and all output gets produced to and compiled from an alternative directory structure.

Of course this is possible with small changes, but you have to know where.

Xtext Generator Workflow

In order to direct the output of the Xtext generator to directories other than the defaults (src, src-gen => src/main/java, target/generated/java), the MWE workflow file has to be changed. For the generator component we have to override the properties srcPath and srcGenPath. Further, these output directories should be parameterized by the same properties that Maven uses, namely project.build.sourceDirectory and project.build.directory. These properties need to be configured with defaults in the Generate<MyDSL>.properties.
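A sketch of the relevant part of such a workflow file; the property names srcPath and srcGenPath come from the post, while the file name, property defaults and surrounding component attributes are illustrative and may differ in your Xtext version:

```xml
<workflow>
  <!-- defaults live in GenerateMyDsl.properties, e.g.
         srcPath=src/main/java
         srcGenPath=target/generated/java
       and can be overridden from Maven via -D system properties -->
  <property file="org/example/GenerateMyDsl.properties"/>

  <component class="org.eclipse.xtext.generator.Generator">
    <pathRtProject value="${runtimeProject}"/>
    <!-- redirect generator output to the Maven-style directories -->
    <srcPath value="/${srcPath}"/>
    <srcGenPath value="/${srcGenPath}"/>
    <!-- language and fragment configuration omitted -->
  </component>
</workflow>
```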

pom.xml: build section

The Java source directories also contain non-Java resources, like the workflow, the grammar file etc. Normally these would go into the resources directory of a Maven project, but Xtext (0.7.2) cannot be configured to produce resource files to a different directory than the Java sources. On the other hand, those resources need to be found on the classpath during the build. This requires that we add the Java directories as resource directories in the build section of the POM. These settings already existed before; they just had to be adapted.

The <sourceDirectory> entry is no longer necessary, since the main Java source directory is now src/main/java, which follows the Maven standard directory layout and gets compiled automatically.
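A sketch of how the resource declarations could look in the build section (the directory names are illustrative):

```xml
<build>
  <resources>
    <!-- grammar, workflow and .properties files live next to the Java sources -->
    <resource>
      <directory>src/main/java</directory>
      <excludes>
        <exclude>**/*.java</exclude>
      </excludes>
    </resource>
    <resource>
      <directory>${project.build.directory}/generated/java</directory>
      <excludes>
        <exclude>**/*.java</exclude>
      </excludes>
    </resource>
  </resources>
</build>
```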

maven-clean-plugin

Xtext produces output into two projects: the grammar project and the UI project. Now, if the generated UI sources are produced below /target, calling ‘mvn clean install’ would have an undesired side effect. Maven builds both modules one after the other, so when the UI module is built with the goals ‘clean install’, the target directory is removed and the previously generated sources are lost.

The solution is that the maven-clean-plugin must be deactivated for the UI module, and the grammar module must clean the target directory of the UI module.
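Both parts can be expressed with standard maven-clean-plugin configuration; a sketch, assuming a UI module named org.example.mydsl.ui (the module name is illustrative):

```xml
<!-- UI module: skip cleaning, so the sources generated below target survive -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clean-plugin</artifactId>
  <configuration>
    <skip>true</skip>
  </configuration>
</plugin>

<!-- grammar module: additionally clean the UI module's target directory -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clean-plugin</artifactId>
  <configuration>
    <filesets>
      <fileset>
        <directory>../org.example.mydsl.ui/target</directory>
      </fileset>
    </filesets>
  </configuration>
</plugin>
```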

I got the chance to become a bit more familiar with Eclipse SMILA and started developing a configuration toolkit with Xtext. The goal is to develop a prototype that enables an easier setup of a valid SMILA configuration using a textual DSL, with all the benefits you get from such a DSL, like semantic validation, content assist etc. SMILA is configured by a bunch of XML files conforming to defined XSDs. Sometimes information is spread across different configuration files, and misconfiguration leads to runtime errors or even to no error at all.

SMILA is an extensible framework for building search solutions to access unstructured information in the enterprise. Besides providing essential infrastructure components and services, SMILA also delivers ready-to-use add-on components, like connectors to most relevant data sources. Using the framework as their basis will enable developers to concentrate on the creation of higher value solutions, like semantic driven applications etc.

To give a rough idea: you can configure different kinds of agents which search media for information (e.g. files, web pages etc.); relevant data is extracted from those resources and published to a queue (ActiveMQ is used by default). Listeners react on entries and execute BPEL processes to process the information. The final goal is to index the data in stores, which can be searched by clients. Lucene is used as the default indexing engine.

After finishing the checkout I was finally able to follow the good 5 Minutes to Success tutorial again. But don’t expect to finish the tutorial in 5 minutes ;-) One thing to mention: SMILA requires Java 6, and my development IDE is started with Java 5 by default. So I needed to configure Java 6 for my target platform and also had to add the RCP delta pack, since 1.6 requires 64-bit libraries on Mac.

Contained in the sources is an example configuration project SMILA.application, which can be started by a launch configuration in the SMILA.launch project. Here is a small screenshot of the SMILA.application project structure.

The application contains several XML configuration files and their XSDs in a structure which reflects the plug-ins that are used. The tutorial explains small changes to the configuration and which files have to be changed, but setting up a brand-new project might become more complicated if one is not familiar with the structure.

Starting the prototype

First I have to make clear that the following is an early development state. I plan to extend the functionality when I get some time again. Since I’m often engaged at customers, I cannot tell how fast I will progress. At least I get the possibility to spend some days on it in the near future, so I’m expecting to have something useful soon.

I created the Xtext projects for the SMILA DSL and added some first rules. After running the MWE workflow Xtext generated the project infrastructure.

SMILA project wizard

When looking at the example project I recognized that a normal project setup would require copying/pasting an existing one and changing some files. Therefore, extending the generated project wizard seemed to be a good starting point. The extended wizard now lets you set up a SMILA application with all the required files.

After finishing the wizard, a project is created in the workspace. All static resources (especially the project structure and XSDs) are copied from the UI plug-in into the new project, and as a start some files are generated using Xpand with the information filled into the wizard.

The wizard generated from the SimpleProjectWizardFragment was not as extensible as it should be for my case, so I had to copy some code from the generated classes and provide a manual implementation. I think the fragment could easily be improved; I will set up a change request on that later and post it to Bugzilla.

At the moment the project wizard generates the following artifacts from the information provided on the pages:

SMILA DSL model file

log4j.properties

Launch configuration

Tomcat server config

Here you can see the project the wizard created:

Crawler configuration

The first configuration I targeted with the DSL is the configuration of the FileSystemCrawler and the FeedAgent. This is pretty straightforward, nearly a 1:1 mapping. Here’s an excerpt from the appropriate configuration file “feed.xml” shipped with the example:

And here is the same situation described in the DSL (the box with “caseSensitive” is there because I pressed CTRL+SPACE after the keyword “recursive”, and the content assist proposes that “caseSensitive” could be entered here):

I decided that the record attribute name (in XML the Attribute#name property) can be omitted when it matches the File attribute name, which I think will often be the case. A mapping only has to be specified when the names don’t match. In this example, the mapping is

FileExtension -> Extension

“FileExtension” is the File attribute name and “Extension” is the name of the Record attribute.

Flags are added in brackets and are optional (key, hash, attachment).
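Putting the rules above together, a crawler configuration in the DSL could look roughly like the following sketch. The concrete keywords and attribute names are illustrative, not the actual grammar:

```
crawler file {
    recursive
    attributes {
        Name (key)                    // name matches the File attribute, flags in brackets
        Path
        FileExtension -> Extension    // explicit mapping, since the names differ
        Content (hash, attachment)
    }
}
```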

Builder Integration

With Xtext Helios M4 a builder infrastructure was added to Xtext. I leveraged this infrastructure to generate the resulting configuration files on the fly when you save the DSL model. So if you, for example, add an “Include” line to your model, the respective crawler config is automatically changed. Even better: when you rename your crawler, say from “file” to “userdir_scanner”, the configuration file “file.xml” gets deleted from your workspace and is replaced by “userdir_scanner.xml”!

After renaming the FileSystemCrawler:

Outlook

This is just the start of this project, and many things remain to be done. I plan to use this project also as a good example of using the Xtext features properly, open sourced of course. Also, I have to learn more about SMILA and its configuration. I’m in contact with Sebastian Voigt from brox, co-lead of the SMILA project. With his help I think this project can later become a valuable contribution to SMILA.

Here are some features that I want to add to this project:

Complete language for covering the tutorial
In a first step, at least everything that makes up the “5 Minutes to Success” tutorial should be describable in the DSL, and the configuration files should be generated from that description.

Integrate existing configuration files
I saw that some configuration files might not be worth mapping to a DSL and might better be left in XML for editing. One example is QueueWorkerConnectionConfig.xml, where the available brokers are defined. Of course, from the DSL I want to refer to brokers at several places, and I need to get them from this file. My first idea here is to generate EMF models from the XSDs using the XSD importer. That makes it possible to reference types from that schema directly in the DSL. It should work like the normal integration of existing Ecore models.

Validation
One of the major benefits the DSL can provide is the ease of adding validation to the models. Especially consistency constraints make sense, for example to check that every queue records are routed to has a listener that processes those records further.

JDT integration
In the BPEL configuration files, services are invoked. The services are qualified by their class names, and the parameters that can be passed correspond to properties. At Eclipse Summit 2009, Sven Efftinge and Sebastian Zarnekow showed a nice integration of Xtext with JDT to add content assist and validation for qualified Java class names.

Product build
The complete bundle, SMILA and the Configuration toolkit, should be available as a ready-to-use product. I’m planning to use Maven Tycho for setting up the build process.

These are just a few examples of what I can imagine for the future. I hope that I find or get some time to realize this.

Usually code generation is a purely sequential process. Since the model does not change during the generation of an artifact, all content can be computed in the template where it is needed for the output. But sometimes there is the wish to defer output to a later point in time during the generation of an artifact.

The typical use case for this is import statements. If, for example, you want to generate a Java class and import all used types, the following alternatives exist:

Compute the types that the about-to-be-generated class will use upfront, before the class body is generated

Print out all type names fully qualified wherever needed and organize the imports with a postprocessor. For Java code generation the Hybridlabs Beautifier is widely used.

However, both approaches do not solve the problem seamlessly. What is really needed is some kind of lazy evaluation in Xpand. Jos Warmer therefore once wrote a feature proposal. The feature he proposed for Xpand is called Insertion Point. The idea was to mark some point in the Xpand template where code will be inserted at a later point in time. Code is evaluated into this insertion point when the content can be derived more easily.

From this feature proposal, Feature Request #261607 was created in the Eclipse Bugzilla system. In this bug entry, and also offline, a lively discussion arose in the team. The challenges for this feature request were:

The Xpand language has a rather small set of keywords. Adding this feature, which is used only in some cases, should not introduce too many changes to the Xpand language

The latest proposal introduces just one new keyword in Xpand, but requires an implementation pattern with an Xtend function. The proposal is to add a keyword ONFILECLOSE to the EXPAND statement. By calling EXPAND with ONFILECLOSE, the evaluation of the EXPAND statement is deferred until the FILE statement is closed. Any state that is used by the called definition is computed during the FILE evaluation. The EXPAND statement is evaluated with the execution context that is active when the EXPAND statement is reached.

Let’s see this by example. We take the project that Xpand’s project wizard creates, with small changes. The entities and types now have an additional ‘packageName’ attribute, and both entities have been assigned different packages, ‘entities1’ and ‘entities2’. Additionally, entity ‘Person’ has a feature ‘birthday’ of type Date, which is mapped to java.util.Date. Therefore the class Person.java has to import entities2.Address and java.util.Date. The used types should be collected while rendering instance variables and accessor methods, but inserted earlier in the code.

First take a look at the template code:
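The original listing did not survive here; based on the description that follows, the template could look roughly like this sketch (the definition name javaClass, the feature iteration and the file path are illustrative):

```
«DEFINE javaClass FOR Entity»
«FILE packageName + "/" + name + ".java"»
package «packageName»;

«REM» deferred: the list is still empty when this line is reached «ENDREM»
«EXPAND ImportBlock FOREACH UsedType() ONFILECLOSE»

public class «name» {
	«FOREACH features AS f»
	«addUsedType(f.type)»
	private «f.type.name» «f.name»;
	«ENDFOREACH»
}
«ENDFILE»
«ENDDEFINE»

«DEFINE ImportBlock FOR Type»
import «packageName».«name»;
«ENDDEFINE»
```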

As you can see, the definition ImportBlock is called for each Type instance (e.g. Entity and DataType instances) in the collection returned by the Xtend function UsedType(). In an alternative approach, this function would be responsible for computing all the types that will be used by this Entity instance. But for the new implementation it just creates an empty list and returns it. The implementation of the UsedType() function (in file GeneratorExtensions.ext) is:

create List[Type] UsedType (Entity e) : (List[Type]) {};

So at the time the Xpand engine reaches EXPAND ImportBlock… the list is still empty. Now note the ONFILECLOSE keyword at the end of the EXPAND statement. It tells the engine that the evaluation of this Xpand statement should be deferred until the file is about to be closed. During evaluation of the template code, in the FOREACH loop, another extension function addUsedType(f.type) is called. It adds the type of the currently processed feature to the collection returned by UsedType(). Therefore it is important that the UsedType() function uses the create keyword, since we want to create the collection on first access for an Entity and return the same instance when UsedType() is called again later for that Entity.

The function addUsedType() is used in the template like this:

«addUsedType(f.type)»

Xpand would print the result of the function as a string, but we don’t want to produce any output by calling the function. Therefore we make sure that the function adds the type to the collection and returns an empty string:

addUsedType (Entity e, Type t) : UsedType(e).add(t) -> "";

During evaluation, types may be used multiple times within the template, but we want just one import statement per type. Further, we don’t need imports for types from the java.lang package (here the packageName information for the DataType instances String and Integer is null) or for types that are in the same package as the entity. Therefore we transform the UsedType() collection before finally invoking the ImportBlock definition.
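A sketch of such a transformation as an Xtend function in GeneratorExtensions.ext; the function name importedTypes is illustrative, while select and toSet are standard oAW expression operations:

```
// drop java.lang types (packageName == null), types from the entity's own
// package, and duplicates
importedTypes(Entity e) :
	UsedType(e)
		.select(t | t.packageName != null && t.packageName != e.packageName)
		.toSet();
```

The template would then expand ImportBlock over this filtered collection instead of the raw one.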

Lazy evaluation allows code to be generated with Xpand in a non-sequential manner. The proposed solution using ONFILECLOSE realizes the desired Insertion Point feature by adding just one additional keyword to the Xpand language. It does not break existing template code. Once accepted, this code will soon be contributed to Xpand 0.8.0-M6. For those who want to test it, I have created a feature patch for the org.eclipse.xpand feature. The example project with the sources listed in this article can be downloaded here.

My colleague Heiko Behrens and I will give a session at the upcoming W-JAX conference.
The session is entitled “Mastering differentiated MDSD Requirements at Deutsche Boerse AG”, and we will share the experience with successfully applying MDSD approaches that we gained through our consulting work at Deutsche Boerse. The project at Deutsche Boerse is challenging in many ways and a great example of how model-driven software development really helps to master complexity. Together with the New York ISE, Deutsche Boerse is developing a new Global Trading System (GTS), which is scheduled to start operations in 2011. High performance and reliability are crucial for this system, and it would not be possible to deliver both without modeling and code generation. Of course we leverage Eclipse Modeling and openArchitectureWare for that. We already held this session at Code Generation 2009 with overwhelming feedback; in the meantime the story has continued, and we will include the new experiences in our talk.