
Category Archives: code generation

Cloudfier is an approach for building business applications, and since role-based access control (RBAC) is such an important thing for any business application, Cloudfier is bound to provide support for modeling what users can do to application objects. Here is our plan.

[There are several proposals by third-parties on how to do security with UML (remember, TextUML is UML), but the OMG itself has not officially adopted any so far. So, since there is no standard and no clear 3rd-party winner, I decided I might as well make up my own approach, tailor-made for the needs of business applications.]

The gist of the idea:

classes, attributes and operations may declare “access constraints” – UML constraints specialized in describing how accessible an element (in practice, a class, attribute or operation) is;

access constraints are defined for one or more user roles – one element may have multiple constraints, matching different roles;

the constraint specification may be an expression that (typically) will take the current logged-in user/actor into account in order to decide whether access should be allowed.
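As a rough sketch of the intended semantics (the names here are illustrative, not the actual TextUML syntax or Cloudfier implementation), an access check along those lines could look like this in plain Java:

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Hypothetical sketch of the semantics described above: a constraint grants
// capabilities to a set of roles, optionally guarded by a condition over the
// current logged-in user. Names are made up for illustration.
public class AccessCheck {
    enum AccessCapability { CREATE, READ, UPDATE, DELETE, CALL }

    record User(String name, Set<String> roles) {}

    record AccessConstraint(Set<String> roles, Set<AccessCapability> allowed,
            Predicate<User> condition) {
        boolean grants(User user, AccessCapability capability) {
            // applies if the user has one of the roles, the capability is
            // covered, and the condition over the current user holds
            return user.roles().stream().anyMatch(roles::contains)
                    && allowed.contains(capability)
                    && condition.test(user);
        }
    }

    // access is allowed if any constraint on the element grants it
    static boolean isAllowed(List<AccessConstraint> constraints, User user,
            AccessCapability capability) {
        return constraints.stream().anyMatch(c -> c.grants(user, capability));
    }

    public static void main(String[] args) {
        List<AccessConstraint> constraints = List.of(
                new AccessConstraint(Set.of("Employee"),
                        EnumSet.of(AccessCapability.READ), u -> true),
                new AccessConstraint(Set.of("Manager"),
                        EnumSet.of(AccessCapability.UPDATE), u -> true));
        User employee = new User("ana", Set.of("Employee"));
        System.out.println(isAllowed(constraints, employee, AccessCapability.READ));
        System.out.println(isAllowed(constraints, employee, AccessCapability.UPDATE));
    }
}
```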

Profile changes

TextUML models, by default, can count on a UML profile called mdd_extensions. This profile defines several extensions to the base UML standard, for things such as allowing one to mark a class as a test class, elements as debuggable, activities as closures, blocks as initialization blocks, type casts, attribute derivations, etc. The mdd_extensions profile was enhanced to support the idea of role classes and access constraints.

Role classes

A role class is a class that represents a role an actor can play. Here is the definition of the stereotype:

(* A role class is a class that represents the role for a user. *)
stereotype Role extends UML::Class
end;

Access constraints

Access constraints are constraints with the “Access” stereotype applied to them. The stereotype allows setting:

the roles affected – user roles that are covered by this constraint;

the capabilities allowed – what users with one of the roles can do. Values come from the AccessCapability enumeration.

The constraint specification, which is a boolean value specification (could be just a constant, or a complex expression), determines (in addition to the user roles) whether the constraint applies. See the source that defines those extensions:

For simplicity, I omitted details on the role classes, but they are normal classes otherwise and can have any attributes you may want, may specialize other classes etc.

But the use of the “allow” keyword to declare access constraints should be clear. If it isn’t to you, please provide feedback, here or on the project issue tracker.

Great, when can I use this?

TextUML Toolkit users: you should be able to use this now if you update the plug-in in Eclipse.

Cloudfier users: Cloudfier needs to honor the new access control features when running the model natively, or generating code, and some work is required before that happens, so you will need to wait a little longer to use this feature in your Cloudfier applications.

No matter which tool you use (or even if you don’t use any of them), if you have any opinion on the choices made, your feedback is really quite welcome.

It has been a while since I first meant to implement this, but Cloudfier’s Expert4J (which, if you don’t know, is a gap-free code generator for JavaEE) finally got support for generating JPA queries that contain subqueries.

For example, take this query in TextUML, which returns all clients that have at least one invoice that has yet to be submitted to the client:

The main challenge here was around generating the variable names when accessing the query roots. That required beefing up Expert4J with deeper data flow analysis. It was tough, and almost overwhelming (given the short windows I had to work on this between client work), but it is finally done, and I am proud of the result. You can see tests for this and other query-related code generation features in QueryActionGenerator.xtend.

I am no JPA expert, so if you are familiar with JPA and you have feedback on where the generated code could be improved, please let me know (and if you know people that grok JPA, please pass this post on to them). The entire code for the example Time Tracking and Invoicing application is available here.

Back in March I attended my first EclipseCon. It was great finally meeting so many people from the Eclipse community I only talked to before via email/newsgroups/bugzilla/twitter/LinkedIn. Also, nice to see again some of the Genologics folks living in the Bay Area, and some of the OTI/IBM Ottawa peeps I worked with when I was an Eclipse committer back in the day.

I also presented a session: “Generating Business Applications from Executable Models”. Feedback was mostly positive, with some good points about the delivery (I rushed things unnecessarily at the end to finish on time, not knowing there was a good gap between sessions). Here are the slides:

This is the last installment to a series started about two weeks ago. If you remember, I set off to build a code generator that could produce 100% of the code for a business application targeting the MEAN (for Mongo, Express, Node.js and Angular) stack. I am writing this a day after the presentation took place.

So, the big day finally arrived. Unfortunately, the presentation was not one of my best. My delivery was not that great and, to make matters worse, it turns out my idea of exposing MDD (with Javascript as the target platform) to a crowd seeking wisdom on Javascript didn’t work very well: people didn’t seem to be interested in modeling and code generation at all. Something to take into account in the future. Anyways, the slides (in Portuguese) appear below this post.

Also, the code generator was not complete (was anyone else surprised?), so I couldn’t show code running for many features, and instead had to show the generated code (that looked almost right but not quite there yet). I guess that contributed to a less interesting presentation.

On the bright side, a lot of progress on the code generator was made. Take a look at the latest state of the generated code. I am quite proud of what is there now. But there is still more progress to be made until at least the sample applications I have all translate to feature complete and correct MEAN applications.

This is another installment to a short series started about 10 days ago. If you remember, I am building a code generator that can produce 100% of the code for a business application targeting the MEAN (for Mongo, Express, Node.js and Angular) stack.

What happened since the last update

Since the previous post, a lot of progress has been made. Much of the code for a domain model class is being generated (including state machine support), as you can see below (bear in mind there are still quite a few gaps – but I still have 3 days!):

Xtend is still making the difference

What an awesome language for writing code generators Xtend is. See, for example, how simple the code to implement state machine support in Javascript is, including guard conditions on transitions and automatic triggering of on-entry and on-exit behavior:
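The generated Javascript itself lives in the repository; as a language-neutral sketch of the pattern being generated (guarded transitions plus automatic on-entry triggering – on-exit would be analogous), here is a rough Java analogy with made-up state and event names:

```java
import java.util.List;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Rough sketch of the state machine pattern described above: transitions
// carry an optional guard, and entering a state automatically triggers its
// on-entry behavior (on-exit would work the same way). Names are illustrative.
public class StateMachineSketch {
    enum State { DRAFT, SUBMITTED, APPROVED }

    record Transition(State from, String event, State to, BooleanSupplier guard) {}

    static State current = State.DRAFT;
    static StringBuilder log = new StringBuilder();

    static final Map<State, Runnable> onEntry = Map.of(
            State.SUBMITTED, () -> log.append("notify reviewers; "),
            State.APPROVED, () -> log.append("archive; "));

    static final List<Transition> transitions = List.of(
            new Transition(State.DRAFT, "submit", State.SUBMITTED, () -> true),
            // guarded transition: only fires if the guard evaluates to true
            new Transition(State.SUBMITTED, "approve", State.APPROVED,
                    () -> log.length() > 0));

    static void handle(String event) {
        for (Transition t : transitions) {
            if (t.from() == current && t.event().equals(event)
                    && t.guard().getAsBoolean()) {
                current = t.to();
                Runnable entry = onEntry.get(current);
                if (entry != null) entry.run(); // automatic on-entry triggering
                return;
            }
        }
    }

    public static void main(String[] args) {
        handle("submit");
        handle("approve");
        System.out.println(current);
    }
}
```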

Next steps

Sorry if I am being a bit too terse. Will be glad to reply to any questions or comments, but time is running out fast and there is lots to be done still. For the next update, I hope to have Express route generation for the REST API and hopefully will have started on test suite generation. Let’s see how that goes!

This is another installment to a short series started last week. If you remember, we are building a code generator that can produce 100% of the code for a business application targeting the MEAN (for Mongo, Express, Node.js and Angular) stack. This is what happened since the previous post:

I decided to go ahead with UML (TextUML) as modeling language over building a dedicated (more business-oriented) modeling language using Xtext, for time reasons only.

I am still using the Xtend language for writing the generator, and what a difference it makes. As I apply Xtend in various code generation use cases, it becomes evident Xtend was designed as a language for writing code generators. I doubt there is any language out there that can beat it at traversing and making sense of input models, and rendering a textual output.

Progress: generating Mongoose domain schema/models

Since the last update, I made some progress on generating Mongoose schema and models.

The generator is basically traversing the UML activity that defines the behavior of the operation at hand and translating UML actions to the corresponding Javascript fragments. Note that the generator is quite naive at mapping from UML to Javascript code (1:1 right now). It is also only tested in the test case above. Expect that to improve in the next updates.
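To give an idea of what such a naive 1:1 action-to-fragment mapping looks like (this is an illustration only – the action model below is a stand-in, not the real UML metamodel or the actual generator code):

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of a naive 1:1 mapping from simplified UML-style
// actions to Javascript fragments, as described above.
public class NaiveActionMapper {
    record Action(String kind, Map<String, String> args) {}

    static String toJavascript(Action a) {
        return switch (a.kind()) {
            case "ReadVariable" -> a.args().get("name");
            case "AddVariableValue" ->
                    a.args().get("name") + " = " + a.args().get("value") + ";";
            case "CallOperation" ->
                    a.args().get("target") + "." + a.args().get("operation") + "()";
            default -> "/* unsupported: " + a.kind() + " */";
        };
    }

    public static void main(String[] args) {
        List<Action> actions = List.of(
                new Action("AddVariableValue", Map.of("name", "total", "value", "0")),
                new Action("CallOperation",
                        Map.of("target", "invoice", "operation", "submit")));
        actions.forEach(a -> System.out.println(toJavascript(a)));
    }
}
```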

Code generation tip: use local variables to preserve intent in Xtend templates

First code generation tip I have is a simple one: use local variables to preserve intent in your Xtend templates via sensible naming. This is a fragment from the Mongoose code generator:

Notice how the schemaVar, modelVar and modelName variables (which represent Javascript local variables and expressions) help make the template easier to make sense of. I am not arguing a template should only have local variable references as code; I actually prefer to leave delegation to other generation methods inline in the template, instead of pre-invoking them and storing the results in local variables, as that keeps the structure of the template easier to understand. Also, note that even if two things happen to look the same, they may not have the same meaning, and as such may be better off as different variables (case in point, modelVar and modelName).
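The same tip carries over to any template mechanism; here is a made-up illustration with a Java text block standing in for an Xtend template (the names and generated Mongoose code are illustrative):

```java
// Made-up illustration of the tip above: naming the fragments (schemaVar,
// modelVar, modelName) first makes the template itself easier to read.
// modelVar and modelName can look similar ("todo" vs "Todo") but represent
// different concepts: a Javascript variable vs. the registered model name.
public class TemplateNaming {
    static String generateModel(String className) {
        String modelName = className;
        String modelVar = lowerFirst(className);
        String schemaVar = modelVar + "Schema";
        return """
                var %s = new Schema({});
                var %s = mongoose.model('%s', %s);
                """.formatted(schemaVar, modelVar, modelName, schemaVar);
    }

    static String lowerFirst(String s) {
        return Character.toLowerCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        System.out.print(generateModel("Todo"));
    }
}
```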

This is it for today – for the next update I hope to have query generation covered.

In two weeks’ time, I will be presenting at the Javascript track of The Developers’ Conference (or TDC), a regional developer-centric conference. The presentation is titled: “Developing Business Applications on the MEAN Stack Using Model-Driven Development”.

The goal is to show how, from a model that defines the entities, attributes, relationships, states, queries, and actions for some business domain, you can generate a complete (i.e. including behavior) Javascript application that runs on the MEAN stack (which stands for MongoDB, Express, AngularJS, and Node.js). One benefit is that you get to use the shiny new toy of the day and yet remain able to move on to different stacks in the future if (read: ‘when’) needed. Another is that you get clear separation between business and technology, whose virtues I extolled in the past.

The presentation is on October 16. That leaves me with nearly two weeks to prepare. TDC presentations are mostly around showing code, preferably running live, and that code generator won’t write itself, so today I am setting off to build it.

Current status – T-14 days

This is what I am starting from:

I have built a few code generators before, some that do 100% code generation

I have played with all the elements of the MEAN stack, but I am not an expert in any of them (nor should I need to be – just need to find some good examples to borrow from)

I have settled on using Xtend for the code generator (blows everything I used before out of the water, including Velocity, XSLT(!), StringTemplate and Groovy). I have a skeleton started here.

I am still deciding between two different approaches for modeling the application: using UML (via TextUML) or a new (mostly general purpose) language I have been working on (some progress here). Need to make up my mind soon.

Ok. So I am off to build this thing. Wish me luck, and if you feel like chiming in on MDD/Xtext/Xtend/UML or Mongo/Express/Angular/Javascript, please do. I will be posting on this blog with updates for the next two weeks, including a report after the presentation, which I hope will be a happy ending. I also intend to work in the open so you will be able to see my progress if you follow my activity on github.

An upcoming feature in Cloudfier is the automatic generation of fully functional user interfaces that work well on both desktop:

and mobile browsers:

This is just a first stab, but is already available to any Cloudfier apps (like this one, try logging in as user: test@abstratt.com, password: Test1234). Right now the mobile UI is read-only, and does not yet expose actions and relationships as the desktop-oriented web UI does. Watch this space for new developments on that.

The case against generated UIs

Cloudfier has always had support for automatic UI generation for desktop browsers (RIA). However, the generated UI had always been intended as a temporary artifact, to be used only when gathering initial feedback from users and while a handcrafted UI (that accesses the back-end functionality via the automatically generated REST API) is being developed (or in the long term, as a power-user UI). The reason is that automatically generated user-interfaces tend to suck, because they don’t recognize that not all entities/actions/properties have the same importance, and that their importance varies between user roles.

Don’t get me wrong, we strongly believe in the model-driven approach to build fully functional applications from a high-level description of the solution (executable domain models). While we think that is the most sane way of building an application’s database, business and API layers (and that those make up a major portion of the application functionality and development costs), we recognize user interfaces must follow constraints that are not properly represented in a domain model of an application: not all use cases have the same weight, and there is often benefit in adopting metaphors that closely mimic the real world (for example, an audio player application should mimic standard controls from physical audio players).

If model-driven development is to be used for generating user interfaces, the most appropriate approach for generating the implementation of such interfaces (and the interfaces only) would be to craft UI-oriented models using a UI modeling language, such as IFML (although I never tried it). But even if you don’t use a UI-oriented modeling tool, and you build the UI (and the UI only) using traditional construction tools (these days that would be Javascript and HTML/CSS) that connect to a back-end that is fully generated from executable domain models (like Cloudfier supports), you are still much, much better off than building and maintaining the whole thing the traditional way.

Enter mobile UIs

That being said, UIs on mobile devices are usually much simpler than corresponding desktop-based UIs because of the interaction, navigation and dimension constraints imposed by mobile devices, resulting in a UI that shows one application ‘screen’ at a time, with hierarchical navigation. So here is a hypothesis:

Hypothesis: Mobile UIs for line-of-business applications are inherently so much simpler than the corresponding desktop-based UIs, that it is conceivable that generated UIs for mobile devices may provide usability that is similar to manually crafted UIs for said devices.

What do you think? Do you agree that is a quest worth pursuing (and with some likelihood of being proven right)? Or is the answer somehow obvious to you already? Regardless, if you are interested or experienced in user experience and/or model-driven development, please chime in.

Meanwhile, we are setting off to test that hypothesis by building full support for automatically generated mobile UIs for Cloudfier applications. Future posts here will show the progress made as new features (such as actions, relationships and editing) are implemented.

Here at Abstratt we are big believers of model-driven development and automated testing. I wrote here a couple of months ago about how one could represent requirements as test cases for executable models, or test-driven modeling. But another very interesting interaction between the model-driven and test-driven approaches is test-driven code generation.

You may have seen our plan for testing code generation before. We are glad to report that that plan has materialized and code generation tests are now supported in AlphaSimple. Follow the steps below for a quick tour over this cool new feature!

Create a project in AlphaSimple

First, you will need a model to generate code from. Create a project in AlphaSimple and a simple model.

A code generation test case is defined as a pair of templates: one that produces the expected contents, and another that produces the actual contents. Their names must be expected_<name> and actual_<name>. That pair of templates in the test suite above forms a test case named “pojo_enumeration”, which unsurprisingly exercises generation of enumerations in Java. pojo_enumeration is a pre-existing template defined in the “Codegen – POJO templates” project, which is why we have a couple of projects imported in the mdd.properties file, and why we declare our template suite as an extension of the pojo_struct template group. In the typical scenario, though, you would have the templates being tested and the template tests in the same project.

Fix the test failures

If you followed the instructions up to here, you should be seeing a build error like this:

which reports that the generated code is not exactly what was expected – the template generated the enumeration with an explicit public modifier, and your test case did not expect that. Turns out that in this case, the generated code is correct, and the test case is actually incorrect. Fix that by ensuring the expected contents also have the public modifier (note that spaces, newlines and tabs are significant and can cause a test to fail). Save and notice how the build failure goes away.

That is it!

That simple. We built this feature because otherwise crafting templates that can generate code from executable models is really hard to get right. We live by it, and hope you like it too. That is how we got the spanking new version of the POJO target platform to work (see post describing it and the actual project) – we actually wrote the test cases first before writing the templates, and wrote new test cases whenever we found a bug – in the true spirit of test-driven code generation.

Unrelated to preconditions, another case where assertions can be automatically generated is when a property is required (lowerBound > 0):

public void setNumber(String number) {
assert number != null;
...
}

Imperative behavior

In order to achieve 100% code generation, models must specify not only structural aspects, but also behavior (i.e. they must be executable). For example, the massAdjust class operation in the model is defined like this:

Set processing with higher-order functions

Any information management application will require a lot of manipulation of sets of objects. Such sets originate from class extents (akin to “#allInstances” for you Smalltalk heads) or association traversals. For that, TextUML supports the higher-order functions select (filter), collect (map) and reduce (fold), in addition to forEach already shown earlier. For example, the following method returns the best customers, or customers with account balances above a threshold:
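The TextUML method itself is not reproduced here; for readers more at home in Java, select/collect/reduce map roughly onto filter/map/reduce over a stream. A hypothetical "best customers" query (class and threshold made up for illustration) might be sketched as:

```java
import java.math.BigDecimal;
import java.util.List;

// Rough Java-streams analogy of the select (filter) higher-order function
// described above. Customer and the threshold are made up for illustration.
public class BestCustomers {
    record Customer(String name, BigDecimal balance) {}

    // select (filter): customers whose account balance is above the threshold
    static List<Customer> bestCustomers(List<Customer> all, BigDecimal threshold) {
        return all.stream()
                .filter(c -> c.balance().compareTo(threshold) > 0)
                .toList();
    }

    public static void main(String[] args) {
        List<Customer> all = List.of(
                new Customer("Acme", new BigDecimal("5000")),
                new Customer("Bits", new BigDecimal("100")));
        bestCustomers(all, new BigDecimal("1000"))
                .forEach(c -> System.out.println(c.name()));
    }
}
```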

Would you hire AlphaSimple?

Would you hire a developer if they wrote Java code like AlphaSimple produces? For one thing, you can’t complain about the guy not being consistent. Do you think the code AlphaSimple produces needs improvement? Where?

Want to try it yourself?

There are still some bugs in the code generation that we need to fix, but overall the “POJO” target platform is working quite well. If you would like to try it yourself, create an account in AlphaSimple and, to make things easier, clone a public project that has code generation enabled (like the “AlphaSimple” project).

We would like to support automated testing of templates in AlphaSimple projects. I have been “test-infected” for most of my career, and the idea of writing code generation templates that are verified manually screams “unsustainable” to me. We need a cheap and easily repeatable way of ensuring code generation templates produce what they intend to produce.

Back-of-a-napkin design for code generation testing:

by convention, for each test case, declare two transformations: one will hardcode the expected results, and another will trigger the transformation to test with some set of parameters (typically, an element of a model). We can pair transformations based on their names: “expected_foo” and “actual_foo” for a test case named “foo”

if the results are identical, the test passes; otherwise, the test fails (optionally, use a warning for the cases where the only differences are around layout, i.e., non-significant chars like spaces/newlines – optionally, because people generating Python code will care about layout)

just as we do for model test failures, report template test failures as build errors

run template tests after model tests, and only if those pass

(cherry on top) report text differences in a sane way (some libraries out there can do text diffing)
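The comparison step of the design above can be sketched in a few lines – here in Java, with names invented for illustration: an exact comparison that can optionally degrade layout-only differences to a warning:

```java
// Sketch of the comparison step in the design above (names are invented):
// a test passes if expected and actual match exactly; if they differ only
// in layout (spaces/newlines/tabs), that can optionally be just a warning.
public class TemplateTestCheck {
    enum Result { PASS, LAYOUT_ONLY_DIFFERENCE, FAIL }

    static Result compare(String expected, String actual) {
        if (expected.equals(actual)) return Result.PASS;
        // collapse all runs of whitespace before comparing again
        String normalizedExpected = expected.replaceAll("\\s+", " ").trim();
        String normalizedActual = actual.replaceAll("\\s+", " ").trim();
        if (normalizedExpected.equals(normalizedActual))
            return Result.LAYOUT_ONLY_DIFFERENCE;
        return Result.FAIL;
    }

    public static void main(String[] args) {
        System.out.println(compare("enum Status { OPEN }", "enum Status { OPEN }"));
        System.out.println(compare("enum Status {\n  OPEN\n}", "enum Status { OPEN }"));
        System.out.println(compare("enum Status { OPEN }", "enum State { OPEN }"));
    }
}
```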

Does that make sense? Any suggestions/comments (simpler is better)? Have you done or seen anything similar?

This coming Thursday I will be doing a presentation entitled “Code generation – going all the way” to the Vancouver Island Java User Group.

The plan is to take the audience from the most basic ideas around generating code from models, visiting approaches increasingly more sophisticated, analyzing their pros and cons, all the way to full code generation based on executable models.

In the process, we will be taking a cursory look at some code generation tools in the market, culminating with a preview of the upcoming release of AlphaSimple, our online modeling tool, which will support executable modeling and full code generation.

If you are in Victoria and think developing business applications has become just way too complicated and labor-intensive, and that there must be a saner way to build and evolve them (no matter what platforms you use), come to this presentation and learn how executable models and full code generation can fix that.

So it finally hits Ted, The Enterprise Developer: all his enterprise applications consisted of the same architectural style applied ad nauseam to each of the entities they dealt with. And Ted asks himself: “why am I wasting so much time of my life doing the same stuff again and again, for each new application, module or entity in the system? The implementation is always the same, only the data model and business rules change from entity to entity!”

The Epiphany

So Ted figures: “just like I write code to test my code, I will write code to write my code!”.

Ted decides that, for his next project, he will take the approach of code generation. Ted is going to model all domain entities as UML classes, and have the code generator produce not only the Java (or C#, or whatever) classes, properties, relationships and methods, but all the boilerplate that goes along with it (constructors, getters, setters, lazy initialization, etc). “This is going to be awesome.”

The Compromise

One of the first things Ted realizes is that since his UML models are pretty dumb and contain no behavior (“UML models can have no behavior, right?”), there is no way to fully generate the code. Bummer.

Ted still has all these empty methods that need to be filled in for the application to be fully functional. So he starts filling them in with handwritten code.

Reality Kicks In

Things are looking great. Ted is already filling in the stubbed methods for the tenth entity in the system. But then he realizes there is a problem in the generated code. It would be an easy fix in his generator, and rerunning it will fix the problem everywhere (isn’t that beautiful?). However, Ted would end up losing all changes he had made so far. Argh.

Any way out?

Ted thinks: “shoot, this was going so well, look at how much code I produced in so little time. There must be a solution for this.”

He almost feels like backing up his current code somewhere, regenerating the code (losing his changes) with the new generator, and then adding his handwritten code back (“Just this once!”). But he knows better. At some point he will need to regenerate the code again (and then again, and again…), and his team won’t buy the approach if it is that complicated to fix problems or to react to changes. It will look pretty bad.

He opens a new browser tab, and starts thinking about the best search terms he should use to search for a solution to this problem…

In the next episode, Ted, The Enterprise Developer, continues his saga in search for a fix to his (currently) broken approach to code generation. If you have any ideas of what he should try next, let me know in the comments.

This just in: you can now generate code for AlphaSimple projects from within your Maven-based project build! That gives you a convenient way of getting the code generated by AlphaSimple into your (and your teammates’) development environment, or in your automated builds.

How do you do that? Let’s see.

Step 0: create your model(s) and template(s)

You must have an existing project in AlphaSimple. This was the subject of a previous post. Read it first if you don’t know how to create models and templates in AlphaSimple. Make sure your project is shared.

Feeling lazy?

Okay… just copy and paste the pom definition from this file into your pom.xml. You can skip down to step 3, and it will work out of the box (generating code from a pre-existing shared project).

In summary, this executes the generate goal of the AlphaSimple plugin during the generate-sources phase of the Maven lifecycle.

In the example above, the plug-in is configured to execute the generator at http://cloudfier.com/alphasimple/mdd/generator/rafael-276/simple, for the AlphaSimple sample project (see this post for how to obtain a similar URI for your own project).

Also, files will be generated at the specified location (which in the example above will map to target/generated-src/main/java). In order for them to be seen by the Java compiler, that location must be configured as a source directory, for instance, by specifying a non-standard source location in your module:
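One common way to register such an extra source location is the build-helper-maven-plugin; as a sketch (the path below matches the example above, but coordinates and ids are illustrative):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-generated-sources</id>
      <phase>generate-sources</phase>
      <goals><goal>add-source</goal></goals>
      <configuration>
        <sources>
          <source>target/generated-src/main/java</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
```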

The generate-sources phase or any other phase that follows (such as compile, package, install etc. – see the lifecycle reference) will cause the code to be regenerated. As you make changes to your models or templates in AlphaSimple, further runs of the generate goal will take those changes into account.

In the case of generating Java code, you will want to include at least the compile phase so you can tell whether the generated code is valid (if you get an error about generics not being allowed in source level 1.3, see this).

What just happened?

The AlphaSimple Maven plugin does not know how to generate code, nor does it depend on other Maven artifacts that do. All it does is hit the code generation endpoint in the AlphaSimple REST API, request code to be generated for the chosen target platform, and then extract the resulting ZIP stream into the chosen location in the file system.
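The extraction part is plain Java; a minimal sketch of unpacking a ZIP stream into a target directory (simplified – no error handling or zip-slip path sanitization, and the in-memory ZIP below stands in for the HTTP response body) could be:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

// Minimal sketch of the "extract a ZIP stream into a location" step described
// above (simplified: no error handling, no zip-slip path sanitization).
public class UnzipSketch {
    static void unzipTo(InputStream in, Path targetDir) throws IOException {
        try (ZipInputStream zip = new ZipInputStream(in)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                Path out = targetDir.resolve(entry.getName());
                Files.createDirectories(out.getParent());
                Files.copy(zip, out); // copies the current entry's bytes
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // build a tiny in-memory ZIP to stand in for the HTTP response body
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            zip.putNextEntry(new ZipEntry("src/Hello.java"));
            zip.write("class Hello {}".getBytes());
            zip.closeEntry();
        }
        Path target = Files.createTempDirectory("generated");
        unzipTo(new ByteArrayInputStream(bytes.toByteArray()), target);
        System.out.println(Files.readString(target.resolve("src/Hello.java")));
    }
}
```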

Conclusion
Once you create your models and templates in AlphaSimple (see previous post), it is very easy to include the generated code in your Maven-based projects. All you need to do is to include an execution of the AlphaSimple plug-in and point it to the generator of your choice. It is that easy. But don’t take our word for it, try it yourself and give us your opinion!

AlphaSimple supports StringTemplate as template language (check out this 5-minute introduction). In order to define a template in your AlphaSimple project, create a file with the .stg extension (in StringTemplate lingo, it is a template group file). You can use the example below, which for every class in a model, creates a text file that shows its name, and the names of its attributes and operations:

group simple;
outputPath(class) ::= <<
<! The 'outputPath' template is optional and determines the path of the file generated (the default is the class name) !>
<class.name>.txt
>>
contents(class) ::= <<
<! The 'contents' template is mandatory and is the entry point for generating the contents of the file from a class. !>
Class: <class.name>
Attributes: <class.ownedAttributes:{attr|<attr.name>};separator=", ">
Operations: <class.ownedOperations:{op|<op.name>};separator=", ">
>>

Again, remember to save this file.

Declare your custom template

To enable custom templates, you need to create an AlphaSimple configuration file (mdd.properties). It is a configuration file that drives the compiler and code generation in AlphaSimple. Your file can be as simple as this:

Both entries are mandatory. Ensure the line declaring the template matches the name you chose when creating the template file. Save this file.

Test your template

In order to test your custom template, if you have been using a guest account, you will need to sign up first (it’s free). Your project contents will be preserved.

First, publish your project (see button in the editor). Then, from your list of projects (“My Projects”), share your project (open lock button). For any future modifications to model, template or configuration file, you will need to publish your changes again. This will not be required in the future.

We are almost there. Since there is no UI for triggering custom generation yet, you will need to use the REST API, which is quite easy. Find out the numeric id of your project (from any link pointing to it). Then hit a URI with this shape:

which gives you access to all the objects that AlphaSimple project has: source files (model and template), the configuration file, the generated UML model and corresponding class diagram, and, what we are mostly interested in here, all generators available. Note that it includes not only a generator for the custom template, but some other built-in generators as well. But let’s ignore those for now, and open the generator URI for our custom template (named “simple”). Voila, this is what you should see:

We hope this very simple example gave you an idea of how you can generate code from UML models using AlphaSimple and StringTemplate (even if it doesn’t really generate actual code). In the example template, we only navigate from a class to its operations and attributes, and access their names, but your template has virtually any information from the underlying UML model available to generate from.

If you would like to see more interesting models and actual code generation templates, browse the shared project area. For now, there is currently just one project with an elaborate template. Clone it and model (and generate) away. If you have any feedback, just post a comment here or check the AlphaSimple contact page.

There was some strong (but polite) reaction to some comments I made about the role of model-to-model (M2M) transformations in model-driven development.

My thinking is that what really matters to modeling users (i.e. developers) is that:

they can “program” (i.e. model) at the right level of abstraction, with proper separation of concerns

they can automatically produce running code from those models without further manual elaboration

In that context, M2M is not a requirement. That is not to say that, to support #2 above, tools cannot use model-to-model transformations. But that is probably just an implementation detail of the tool; all that modeling users care about is that they are able to model their solutions and produce working applications. Of course, modeling experts will be interested in less mundane things and more advanced aspects of modeling.

Also, my comments were about model-driven development (MDD), and not model-driven engineering (it seems most people disagreeing with me are from the MDE camp). To be honest, I didn’t even know what MDE meant until recently (and I know that MDE contains MDD), and have just a superficial grasp now. To be even more honest, I am not interested in the possibilities of the larger MDE field. At least not for now. I will explain.

You see, I think we still live in the dark ages of software development. I want that situation to change, and the most obvious single thing that will let us do that is to move away from general purpose 3GLs to the next level, where developers can express themselves at the right level of abstraction, and businesses can preserve their investment in understanding their domain while at the same time being able to take advantage of technological innovation. Hence, my deep interest in making MDD mainstream.

I see value in the things beyond MDD that MDE seems to be concerned with (mining existing systems for models, model-level optimization). I just don’t think they are essential for MDD to succeed. Thus, I prefer to just cross that stuff off for now. We need to lower the barrier to adoption as much as we can, and we need to focus our efforts on the essentials. The less concepts we need to cram into people’s minds in order to take MDD to the mainstream, the better. It is already hard enough to get buy-in for MDD (even from very smart developers) as it is now. It does not matter how powerful model technology can be, if it never becomes accessible to the people that create most of the software in the world.

Model interpretation vs. code generation? There were recently two interesting posts on this topic, both generating interesting discussions. I am not going to try to define or do an analysis of pros and cons of each approach as those two articles already do a good job at that. What I have to add is that if you use model-driven development, even if you have decided for code generation to take an application to production, it still makes a lot of sense to adopt model interpretation during development time.

For one, model interpretation allows you to execute a model with the fastest turnaround. If the model is valid, it is ready to run. Model interpretation allows you to:

play with your model as you go (for instance, using a dynamically generated UI, like AlphaSimple does)

run automated tests against it

debug it

All without having to generate code to some target platform, which often involves multiple steps of transformation (generating source code, compiling source code to object code, linking with static libraries, regenerating the database schema, redeploying to the application server/emulator, etc).
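A minimal sketch of why interpretation shortens that loop (the model format and API here are invented, not how any particular tool works): a tiny “interpreter” runs a model directly, with no generate/compile/deploy pipeline in between, and a failure points straight at a model element.

```python
# Hypothetical in-memory model: properties plus operations expressed
# against the model itself, not against generated platform code.
model = {
    "class": "Counter",
    "properties": {"count": 0},
    "operations": {
        "increment": lambda self: self.update("count", self.get("count") + 1),
    },
}

class Instance:
    """Runs a model directly: if the model is valid, it is ready to run."""

    def __init__(self, model):
        self.model = model
        self.state = dict(model["properties"])

    def get(self, prop):
        return self.state[prop]

    def update(self, prop, value):
        self.state[prop] = value

    def call(self, op):
        # Any failure raised here is located in the model, not in
        # generated artifacts on some target platform.
        self.model["operations"][op](self)

c = Instance(model)
c.call("increment")
c.call("increment")
print(c.get("count"))  # -> 2
```

Playing with the model, testing it, and debugging it all happen against `Instance` directly, which is the essence of the turnaround argument above.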

But it is not just a matter of turnaround. It really makes a lot more sense:

you and other stakeholders can play with the model on day 1. No need to commit to a specific target platform, or develop or buy code generators, when all you want to validate is the model itself and whether it satisfies the requirements from the point of view of the domain. Heck, you might not even know yet your target platform!

failures during automated model testing expose problems that are clearly in the model, not in the code generation. And there is no need to trace the failure from the generated artifact where it occurred back to the model element that originated it, which is often hard (and is a common drawback raised against model-driven development);

debugging the model itself prevents the debugging context from being littered with runtime information related to implementation concerns. Anyone who has debugged Java code in enterprise applications will relate: most of the frames on the execution stack belong to 3rd-party middleware code for things such as remoting, security, and concurrency, making it really hard to find a stack frame with your own code.

Model-driven development is really all about separation of concerns, obviously with a strong focus on models. Forcing one to generate code all the way to the target platform before models can be tried, tested or debugged misses that important point. Not only is it inefficient in terms of turnaround, it also adds a lot of clutter that gets in the way of how one understands the models.

In summary, regardless of what strategy you choose for building and deploying your application, I strongly believe model interpretation provides a much more natural and efficient way for developing the models themselves.