Discussions

The Middleware Company has released the results of their latest research, a Maintainability Analysis of Model Driven Architecture. The research measures the gain in developer productivity from using an MDA approach to maintain an existing application. Following TMC's research last year on using MDA to build a new application, this latest research endeavor produced similar results: the MDA approach improved productivity by 37% over a traditional, "code-centric" approach.

This study had two teams fulfill the same development requirements from a common specification, one team using an MDA approach and the other using a more traditional approach centered on code-writing. In this study the teams executed a representative series of maintenance tasks on an existing application, which included enhancements to the database schema, business logic and UI, as well as Web service and J2CA client integration.

In addition to the quantitative results, the report details:

o the overall layout of the research approach
o the makeup of the two teams
o the development tasks both teams performed
o the developers' personal experiences -- what was easy or challenging for them
o the critical factors affecting productivity on both sides

This report makes an interesting, informative read for those concerned with development methodologies or simply curious about MDA.

This absolutely backs up our experience, both from a development point of view and from what our clients are telling us. We have put our proverbial eggs into the MDA basket with Integration Object. Like other tools such as XMLSpy, Stylus Studio, Turbo XML, Castor and JAXB, we can generate code from a basic model, usually an XML Schema. We've taken this further, however, by extending Schema validation to handle more complex models like FpML and even SWIFT. We specialise in the code generation, supporting high-performance XML Schema binding, "hand-optimised" serialization (using Externalization) and even JDO support (coming soon). Starting from a basic model, say a schema, you can generate the persistence, Swing or web forms (with complex validation) and chuck the generated Java objects into JavaSpaces for unbelievable performance.
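As a rough illustration of the "hand-optimised" serialization via Externalization mentioned above, here is a minimal sketch using java.io.Externalizable. The Trade class and its fields are hypothetical, not part of any real product; the point is that the generated class writes its fields explicitly instead of relying on Java's slower reflective default serialization.

```java
import java.io.*;

// Hypothetical generated class: hand-optimised serialization via Externalizable,
// writing each field directly rather than using default reflective serialization.
public class Trade implements Externalizable {
    private String id;
    private double amount;

    public Trade() {}  // public no-arg constructor is required by Externalizable
    public Trade(String id, double amount) { this.id = id; this.amount = amount; }

    public String getId() { return id; }
    public double getAmount() { return amount; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(id);          // explicit, field-by-field writes
        out.writeDouble(amount);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        id = in.readUTF();         // must read fields back in the same order
        amount = in.readDouble();
    }

    // Serialize and deserialize through a byte buffer to demonstrate the mechanism.
    public static Trade roundTrip(Trade t) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(t);    // invokes writeExternal
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Trade) ois.readObject();  // invokes readExternal
        }
    }

    public static void main(String[] args) throws Exception {
        Trade copy = roundTrip(new Trade("T-1", 100.0));
        System.out.println(copy.getId() + " " + copy.getAmount());
    }
}
```

A code generator can emit writeExternal/readExternal bodies mechanically from the model, which is what makes this approach a natural fit for schema-driven tools.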

Model Driven Architecture has definitely proven itself for us and our business.

A 37% improvement seems rather low, but then the requirements seem to have been a bit on the clean-cut side. Personally I'd have liked more of a mess, where they had to deal with at least one significant requirements change mid-course. During maintenance such changes tend to be related to invalid assumptions about the revised business model that are not exposed until work actually starts.

> A 37% improvement seems rather low, but then the requirements seem to have been a bit on the clean-cut side.

Yes, for EJB a 37% improvement is too low. I think you would need to repeat this test 20 times or so to get reliable numbers. But that is more an EJB problem than an MDA advantage; EJB is very MDA friendly (there is a lot of garbage). I am not sure about the numbers, but there must be more than 37% garbage in EJB code.

But to be fair, you are not developing a business application per se. You are developing a tool. No?

You are solving the same common (but specific) problem several times over - conversion from representationX (e.g. SWIFT) to representationY (e.g. FoobarML) with domain-specific validation.

For developing a one-off, end-to-end application (say, an electronic trading system) where the business (il)logic, production environment challenges, and user interaction constantly evolve over time, do you still think MDA would be a better bet than working productively at a lower level of abstraction?

After all, EJB was a higher level of abstraction for TP - yet some obviously find it too restrictive for "real work" ;-)

> But to be fair, you are not developing a business application per se. You are developing a tool. No?
>
> You are solving the same common (but specific) problem several times over - conversion from representationX (e.g. SWIFT) to representationY (e.g. FoobarML) with domain-specific validation.
>
> For developing a one-off, end-to-end application (say, an electronic trading system) where the business (il)logic, production environment challenges, and user interaction constantly evolve over time, do you still think MDA would be a better bet than working productively at a lower level of abstraction?

I'm always fair Nick. Yes, "we" are developing a tool, but the tool is used by our clients to develop business applications using MDA.

Even for a one-off e2e application like the one you suggest, an electronic trading system, you don't always have to go the whole "hog" with one methodology or the other. There are messages and information that will be sent from one part of the app to another; you can model these and then build the application logic around the generated code. Your application/business logic then talks through the APIs (interfaces) generated by the MDA tool and is no longer dependent on specific message/data implementations. It's an advantage, not the panacea of methodologies.
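That decoupling can be sketched in a few lines. All names below (PaymentMessage, SwiftPaymentMessage, PaymentProcessor) are hypothetical, not the API of any real MDA tool; the point is that the business logic depends only on a generated interface, so the wire format behind it can change without touching the logic.

```java
// Hypothetical interface of the kind an MDA tool might generate from a message model.
interface PaymentMessage {
    String getSender();
    long getAmountMinorUnits();
}

// One of possibly several generated implementations, each bound to a wire format.
class SwiftPaymentMessage implements PaymentMessage {
    private final String sender;
    private final long amount;
    SwiftPaymentMessage(String sender, long amount) {
        this.sender = sender;
        this.amount = amount;
    }
    public String getSender() { return sender; }
    public long getAmountMinorUnits() { return amount; }
}

// Business logic talks only to the generated interface, never to the format.
class PaymentProcessor {
    boolean approve(PaymentMessage msg) {
        return msg.getAmountMinorUnits() <= 1_000_000; // illustrative business rule
    }
}

public class Demo {
    public static void main(String[] args) {
        PaymentProcessor p = new PaymentProcessor();
        // Swap SwiftPaymentMessage for an FpML-backed one and approve() is unchanged.
        System.out.println(p.approve(new SwiftPaymentMessage("BANKGB2L", 50_000)));
    }
}
```

Swapping in a different generated implementation (say, one parsed from FpML) requires no change to PaymentProcessor, which is exactly the "no longer dependent on specific message/data implementations" claim above.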

> After all, EJB was a higher level of abstraction for TP - yet some obviously find it too restrictive for "real work" ;-)

MDA isn't an abstraction for anything. You often see projects starting with Erwin diagrams or UML; essentially they're doing the same thing, working on the model first and then letting the model "drive" the architecture. The problem with those tools, however, was that they were too heavyweight. Remember trying to start a project with Together? Great tool, bloody expensive, and ran like a one-legged dog.

I have to agree that a "test oriented architecture" / "test driven design" is a nice goal and often achievable. We too are test driven, with several thousand JUnit tests for our deployed code. So, are we Agile, XP, MDA or test driven? Probably a combination of all and more. If the first one is the driver, then which comes first, the test or the model? Interesting question; I'm not sure there's a simple answer. I can say that on average we are more test driven, more Agile and more model driven than most software projects I've seen.

You know me Nick (we worked together for about two years), I've never been one for one architecture to the exclusion of all the others - apart from, that is, Java over .NET :-). We are test driven, model driven and agile. We preach Agile processes (we run courses on it) and we preach test-driven design (we even sell a JUnit-based framework for testing SWIFT), but our core product is a modelling tool that allows you to generate code, XML Schema, databases etc. based on a model. Close to, but perhaps not, the definition of an MDA tool.

I am not surprised that the traditional team was slower than the MDA team, given that the Petstore implementation they were working on was heavy on J2EE specs, especially entity beans. I'd love to see a comparison where the gloves were off, and the traditional team could choose any means of implementing the Petstore. A Spring, Pico and Prevayler combination comes to mind, because I have found that I can attain and sustain a fine velocity using these three.

What I conclude from this study is that MDA *may* be faster if you're implementing your app using a full J2EE stack. But that's independent of the still very much open question of whether J2EE is itself more productive than other combinations.

We (C24) are definitely not J2EE centric, we pondered the question a few years ago and decided to remain "open" when it comes to Java based frameworks. Thankfully we stayed clear of EJBs and I've never liked the idea of selling pets on the internet anyway. :-)

We see considerable gains in productivity using MDA even for the simplest of projects; the model is something almost everyone understands (business people to techies), and so MDA also integrates well with Agile development, i.e. lots of communication and feedback.
We're obviously developing our own MDA tool set, but we're concentrating on the needs of the financial services world (banks), predominantly messaging formats, integration and high performance.

We have almost finished V3, a newly refactored version of our earlier SWIFT/FIX MDA editor. The new version will fully support XML Schema e.g. FpML (working already), JDO (in progress) and Jini's JavaSpaces. I see J2EE as being "just another" set of tools, it already works with JMS and EJBs so I suppose we support them, it's just that no one wants to use EJBs these days, not for real work anyway.

Spring and PicoContainer aren't complementary.
And Prevayler??? You have to be kidding me. As I understand it, Prevayler basically keeps all objects in memory and at set times uses serialisation to write the whole bunch to disk. If there were a blackout you'd lose all your orders since the last time it was written to disk.

Ok, I misunderstood - there is also a transaction log. Still, IMHO using Prevayler wouldn't exactly be comparing apples to apples.

Would you care to elaborate on how PicoContainer and Spring are complementary? Admittedly I haven't had the chance to use PicoContainer yet, though I have read a fair bit about it, so I think I understand what it does.

As far as I know, both PicoContainer and Spring provide a mechanism to wire objects together. Since Spring is rather modular, I am sure you could leave the IoC part out and use PicoContainer instead, but that seems somewhat pointless, since both offer just about the same functionality.

You are right Spring offers constructor dependency injection, but it did not when I started my project, so I replaced Spring's BeanFactory with a Pico backed factory. I believe I will stay with the Pico BeanFactory, because I like the Pico API.
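For readers unfamiliar with the pattern both containers automate, here is a hand-wired sketch of constructor dependency injection; the class names are illustrative, and neither the Pico nor the Spring API appears here. The containers differ mainly in how such a graph is registered and resolved, not in the pattern itself.

```java
// Hand-wired constructor dependency injection, the pattern that both
// PicoContainer and Spring automate. All names here are illustrative.
interface Quotes { double price(String symbol); }

class FixedQuotes implements Quotes {
    public double price(String symbol) { return 42.0; } // stub dependency
}

class Trader {
    private final Quotes quotes;
    Trader(Quotes quotes) { this.quotes = quotes; }     // dependency arrives via constructor
    double cost(String symbol, int qty) { return quotes.price(symbol) * qty; }
}

public class Wiring {
    public static void main(String[] args) {
        // A container would resolve this object graph automatically from its
        // registrations; here we wire it by hand to make the mechanism visible.
        Trader trader = new Trader(new FixedQuotes());
        System.out.println(trader.cost("IBM", 10));
    }
}
```

Because Trader declares its needs in its constructor, swapping FixedQuotes for a live quote feed (or a mock in a JUnit test) needs no change to Trader, which is what makes constructor injection attractive regardless of which container does the wiring.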

You've misunderstood Prevayler. You are correct that it keeps everything in memory and writes out snapshots at set times. What you haven't understood yet is that it also writes down each transaction in a transaction log before it executes the transaction. When restarting after a "blackout" it reads back the latest snapshot and replays the transaction log. No data lost.

> What you haven't understood yet is that it also writes down each transaction in a transaction log before it executes the transaction. When restarting after a "blackout" it reads back the latest snapshot and replays the transaction log. No data lost.

And so, why is it (as shown on the Prevayler site) faster than other databases? Other databases also cache data in memory.
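The mechanism described above can be sketched minimally as follows. This is an illustration of the prevalence pattern only, not the actual Prevayler API, and the in-memory list standing in for the journal would in reality be written and synced to disk before each command executes.

```java
import java.util.*;

// Minimal sketch of the prevalence pattern (NOT the Prevayler API):
// every command is journalled before it executes, and recovery replays
// the journal over the last snapshot.
interface Command { void executeOn(List<String> orders); }

class AddOrder implements Command {
    final String order;
    AddOrder(String order) { this.order = order; }
    public void executeOn(List<String> orders) { orders.add(order); }
}

class Prevalence {
    private final List<String> orders = new ArrayList<>();  // the in-memory "database"
    private final List<Command> log = new ArrayList<>();    // stands in for the on-disk journal

    void execute(Command c) {
        log.add(c);              // 1. journal first (in reality, synced to disk)
        c.executeOn(orders);     // 2. then mutate in-memory state
    }

    // Crash recovery: start from the snapshot, replay journalled commands.
    static List<String> recover(List<String> snapshot, List<Command> journal) {
        List<String> state = new ArrayList<>(snapshot);
        for (Command c : journal) c.executeOn(state);
        return state;
    }

    List<String> orders() { return orders; }
    List<Command> log() { return log; }
}

public class PrevalenceDemo {
    public static void main(String[] args) {
        Prevalence p = new Prevalence();
        p.execute(new AddOrder("buy 100 IBM"));
        p.execute(new AddOrder("sell 50 SUN"));
        // Simulate a restart: an empty snapshot plus the journal reproduces the state.
        System.out.println(Prevalence.recover(new ArrayList<>(), p.log()).equals(p.orders()));
    }
}
```

Broadly, the speed claim rests on the query path never touching disk or SQL at all: reads are plain object traversals, writes only append to the journal, and there is no parsing, JDBC marshalling or buffer-cache management in between.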

Very interesting. I have to say that I have not paid much attention to MDA, but maybe it is now time to do so. I too have been using a 'pattern oriented' mode of development for a few years, and more than once had to refactor patterns from one language to the next. I think that MDA certainly hits the nail on the head by promoting patterns to an independent, higher level of abstraction (models), with the idea of pluggable implementations.

btw I read that the MDA team used OptimalJ. What would be considered as a shortlist of good MDA tools? I also read that the learning curve was steep for the MDA team. Are there any good tutorials around for MDA development? Books?

Regarding books on MDA, there are several out there. MDA Explained by Anneke Kleppe et al and Model Driven Architecture: Applying MDA to Enterprise Computing by David S. Frankel both cover the theory, rationale and mechanics of MDA. These books will help you understand the concepts, but not how to use a specific tool. The first is a quicker read; the second covers MDA in greater depth.

> btw I read that the MDA team used OptimalJ. What would be considered as a shortlist of good MDA tools? I also read that the learning curve was steep for the MDA team. Are there any good tutorials around for MDA development? Books?

I heard they used Rapid Developer from Rational or IBM or whoever they are now.

I also heard they used JBuilder 9 or X as the traditional environment.

> btw I read that the MDA team used OptimalJ. What would be considered as a shortlist of good MDA tools? I also read that the learning curve was steep for the MDA team. Are there any good tutorials around for MDA development? Books?
>
> I heard they used Rapid Developer from Rational or IBM or whoever they are now.
>
> I also heard they used JBuilder 9 or X as the traditional environment.
>
> D.

My bad - that was the previous report on productivity. I just read this one and they did use OptimalJ.

For enterprise software, the industry's ratio of boilerplate to business logic is climbing. The number of *layers* of generated code is also growing. E.g., I use Apache Axis to generate WSDL from my POJOs; then I use it to generate stubs from the generated WSDL; and then I use other tools to generate class files from the generated stub sources, jars from the class files, and then compound jars. Maybe I also use JAXB to generate document marshallers. So many layers to tame.

First off, I'll say that I personally believe that TMC tried very hard to do a fair and balanced study. But vendor-sponsored research is very slanted by nature, and I think that TMC/TSS has been disturbingly sloppy in their disclosures.

I appreciate that pages 7 and 24 of the report identify this as a CompuWare-sponsored study, but I have a real problem with the fact that neither the Press Release nor the TSS story make any mention of this. The sponsorship is very important contextual information.

> we allow [sponsors like CompuWare] the option to prevent us from publishing the report if they feel it would result in harmful publicity.

Exactly. It doesn't really matter how unbiased the lab was, because we have no way of knowing whether TMC already ran 0 or 4 or 23 studies that showed MDA was harder to refactor and maintain than other approaches. When we are only allowed to see the first report with results sufficiently favorable for CompuWare to release, we are seeing this information completely out of context, and numbers like 37% are meaningless.

> We refuse to be influenced by the sponsor in the writing of this report. Sponsorship fees are not contingent upon the results.

No, but revenue-generating follow up work clearly is contingent upon the results. Remember the TMC study on using MDA to build a new application? We don't know how many studies they conducted, but suppose they had not completed a single study that had findings favorable to CompuWare. Do you think CompuWare would have commissioned TMC to perform a follow-up study on maintainability?

The answer is....

No.

Even smart people who are trying very hard to be objective are influenced by that kind of financial pressure. Maybe not every individual engineer, but someone in charge of the study is.

I am not saying that TMC should not conduct vendor-sponsored research. And I am not saying that TMC is intending to sell slanted studies to the highest bidder.

But I do believe that TMC is at best careless, and at worst misleading, when they downplay the fact that the research is vendor-sponsored and protest that sponsorship has no effect on publicly visible results.

I appreciate you taking the time to write up your concerns about our research. Let me take a couple minutes to address some of your concerns:

1) You would be happy to know that we are working on a 'Research Code of Conduct' that all future research would strictly adhere to. This code would be publicly available and published in every research report that is produced. The code will incorporate as many processes and checks/balances as we can to give the research the highest level of credibility.

2) Based upon feedback from the community, we are drafting a new research report format that includes disclosures and the code of conduct on page 1 immediately after the title page and before the Table of Contents. We have no intentions of hiding any of these disclosures and hope that by standardizing on their location, people will not have to wonder or sift through the information. We had not considered whether the disclosure should be in the press release or not since we've been so focused on the reorganization of the research and the Code of Conduct. Our thoughts today have been around how we can make the code and disclosures much more prominent, but we'll take your recommendation into consideration.

3) We separate the sale of sponsored research from the team that implements these projects. Additionally, our greatest asset is our credibility and the excellence of the people that work for us. To willingly slant research, or a report in any way (even if subtle), in the hope of achieving follow-on business is something we have never and will never consider. It may seem like an option, but it's not, knowing that every sale we make is based upon our credibility and honesty.

4) We can fully disclose that we have never had a vendor exercise their "do not publish" option. And, we've never been asked to do research over or in a different format because the vendor was not satisfied with the first outcome. We have had results that vendors have not expected, but no one has requested that the results be altered or that the results not be published.

For appropriate scenarios it would be interesting to run vendor challenges, where opposing vendors co-sponsor the study, each equally funding an unbiased TMC. This would be a gamble, but would seemingly make really good marketing hype for the winner too.

I've been watching the MDA approach for a few months now w/ great interest, and this article finally motivated me to give it a try w/ a small application. I attempted it using Sparx Systems' Enterprise Architect Corporate Edition (30-day eval).

I wanted to evaluate MDA-based development using a couple of different approaches, but the first one I tried was to import an existing application (reverse engineer), make some changes to classes (attributes, methods, relationships, etc.), and use the app's code generation utilities to spit the code back out (forward engineer). My primary concerns for MDA dev were as follows:

* Accuracy of the UML diagrams w/ respect to my actual existing code

* Quality of the generated code

* Speed and ease at which I can manipulate diagrams (add attributes, methods, etc)

* Overall tool usability

I understand that the results are very tool dependent (and I'm sure one or more of my gripes are due to my inexperience w/ the tool), but still...after this small, personal test I think I grudgingly agree w/ Martin Fowler. Here are my gripes about the whole process:

* Generating diagrams based on existing code seemed *ok*. The way EA (Enterprise Architect) did it still seemed a little hokey. EA provided the option to generate a single diagram for each package (and I can see where that'd be useful). If you disabled the option, I guessed that all the imported classes would be drawn in a single diagram. I guessed wrong. If I disabled the option, I couldn't find a generated diagram ANYWHERE.

* Once I did get to a point where I could start manipulating a UML diagram, it seemed to be WAY too slow/cumbersome a process to simply add an attribute to a class. In the time it took to add a simple private String attribute (complete w/ getters/setters), I could've done the same thing 20 times over in IntelliJ. I would've loved to be able to do something like Ctrl+N to automatically create a new class in a diagram, or Ctrl+A to add an attribute directly inside the GUI representation of the class (as opposed to a dialog window). Instead I have to do an endless series of point-click, point-click, point-click....

* There doesn't seem to be good monitoring of the code. For instance, if I changed something in IntelliJ (outside of EA), EA didn't recognize it. I'd have to synchronise the diagram manually by re-importing the class(es).

* EA specifically is far too mouse-driven. As a developer coming from an IDE like IntelliJ, where I can do most of my work from the keyboard using a myriad of shortcuts, I found it impossible to do any work in EA w/o the mouse. Even when I knew what the shortcut for something was, I still couldn't use it, because the window focus was on some other panel.

* It'd be nice if there were a way to do the equivalent of "code folding" in the UML diagrams. For instance, for a simple JavaBean of a domain object that's a bunch of attributes and dumb getters/setters, I personally don't need to see those getters/setters - only the attributes and their data types. That's enough info for me...it'd be nice to customize what things to auto hide.

Again, I realize that many of my problems can be attributed to my inexperience in MDA development, the tool, or my inexperience w/ the tool. If anyone can make any recommendations to a better tool or a better approach, I'm all ears!

> It'd be nice if there were a way to do the equivalent of "code folding" in the UML diagrams.

The UML 2 Infrastructure submission hopes so:

"Propose a complexity management strategy that is based on controlling the visibility of objects and processes by three abstraction-refinement mechanisms: process in-zooming and out-zooming, object unfolding and folding, and state expression and suppression."
-- http://www.omg.org/docs/ad/02-05-08.pdf

> I've been watching the MDA approach for a few months now w/ great interest and this article finally motivated me to give it a try w/ a small application. I attempted using Sparx System's Enterprise Architect Corp. Edition (30 day eval).

The guys from SparxSystems can correct me if I'm wrong, but I don't think they would put Enterprise Architect into the MDA category. Enterprise Architect is a modeling tool.

This brings up an excellent point, however. There is a lot of confusion in the marketplace about what MDA is, and how MDA tools are different than other tools that exist on the market.

People tend to have a couple of buckets in their mind that they are comfortable with. One of those is "modeling" tools, and another is "IDEs". MDA tools like OptimalJ, however, are a new class of tools that are neither a modeling tool nor an IDE. Instead, they sit between modeling tools and IDEs and accelerate the development of pattern-based applications by generating code from models (e.g. OptimalJ integrates with SparxSystems' EA). We call this class of tools "model-driven, pattern-based". MDPB tools are model-driven because models are the input to the code generation process; they are pattern-based because they codify design patterns and pattern frameworks into the transformation engine that the tools use to translate the model into code.

Most modeling tools today (like SparxSystems EA) generate code from models. So what's the difference with MDPB tools? The difference is in the relationship between the model and code. Modeling tools typically generate and synchronize one UML class with one Java class. As such, the model becomes another view of the code. This is code visualization. While this can be valuable in some aspects, as you found, it doesn't make you any more productive.

Where the real productivity comes from is generating many artifacts from one UML class. For instance, from a "Customer" class defined in UML, I should be able to generate:
- The appropriate DDL to create, delete, and init the table in a RDBMS
- A DAO or EJB data access layer (or Hibernate, or JDO, or Spring...)
- A SessionFacade to access the bean
- A ServiceLocator to find the bean
- A set of DTOs to pass to the web tier
- A Struts-based framework to perform CRUD ops on a "Customer"
- All the security, logging, exception and other f/w hooks that I would expect to use

Using this approach, as you might guess, saves a lot of time. Further, as this study shows, after the app is built with MDPB, it is faster to make changes to the app as well.
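To make the idea concrete, here is a hedged sketch of the kinds of artifacts such a model-driven, pattern-based tool might emit from a single "Customer" model element. Every name below is illustrative, not output from OptimalJ or any real product, and a real generator would also emit the DDL, security and logging hooks listed above.

```java
// Hypothetical fragments of what an MDPB tool might generate from one
// "Customer" model element. All names are illustrative.

// 1. A DTO to pass to the web tier.
class CustomerDTO {
    private final long id;
    private final String name;
    CustomerDTO(long id, String name) { this.id = id; this.name = name; }
    long getId() { return id; }
    String getName() { return name; }
}

// 2. A data-access interface; the generated implementation might target
//    JDBC, Hibernate, JDO or an EJB entity bean without the caller knowing.
interface CustomerDAO {
    CustomerDTO findById(long id);
    void save(CustomerDTO customer);
}

// 3. A facade the web tier calls, hiding the persistence choice.
class CustomerFacade {
    private final CustomerDAO dao;
    CustomerFacade(CustomerDAO dao) { this.dao = dao; }
    CustomerDTO lookup(long id) { return dao.findById(id); }
}

public class GeneratedSketch {
    public static void main(String[] args) {
        // In-memory stand-in for a generated persistence layer.
        final java.util.Map<Long, CustomerDTO> table = new java.util.HashMap<>();
        CustomerDAO dao = new CustomerDAO() {
            public CustomerDTO findById(long id) { return table.get(id); }
            public void save(CustomerDTO c) { table.put(c.getId(), c); }
        };
        CustomerFacade facade = new CustomerFacade(dao);
        dao.save(new CustomerDTO(1L, "Alice"));
        System.out.println(facade.lookup(1L).getName());
    }
}
```

The productivity argument is that adding one attribute to the "Customer" model regenerates all of these layers at once, rather than requiring hand edits to the DTO, DAO, facade and UI separately.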

A couple of additional points about MDA that distinguish it from other modeling approaches:

First, MDA is able to do what Mike Burba describes (generate many artifacts from each model element) because the model is multi-layered. The model layers represent levels of abstraction. At the highest level, the "Customer" is a pure business entity, unrelated to any platform or technology. The next level down has the platform specifics, but it is still meta data. At the lowest level is the actual generated (and custom-written) code. In effect, MDA loosens the coupling between model and code, allowing for much greater flexibility in modeling and in code generation.

Second, it's all based on a set of standards, which means your meta data isn't rigidly tied to a specific MDA product.

I attended an MDA presentation with the CEO of the OMG a couple of years ago. The primary purpose is to decouple your business domain from the platform. OCL is the main vehicle for doing this, so any UML produced has to be extremely detailed.

I have to say I'm not convinced by white room studies. The reality in the field will be interesting to see. Ever changing requirements, bugfixes on bugfixes, branches on branches - rewrites, redesigns - 10s or 100s of thousands of lines of code written by teams of developers. This is a real test of any tool or methodology. I've found very few IDEs that can survive this type of environment so far let alone development tools.

I wait with anticipation; the promise of tools that remove large chunks of coding has been around for so long now. So far only two things have done this well: better languages and better APIs. I love tools like IntelliJ (Java IDE) to bits because they _don't_ code for me; they just make the process a lot less painful.

I also hope these tools work because we'd all like to see large scale projects becoming less traumatic ;-) and maybe a little easier/quicker to produce.

Don't get me wrong, code generation is great as a workaround for limitations in APIs and languages (Middlegen is a fantastic example) - let's see how it does as a replacement in this new incarnation, 'MDA'.
