Why did MDE Miss the Boat?

Activity and interest in programming languages seem to reach new highs every month. Whether it is the rise of JavaScript or Python, the launch of Dart, the rebirth of Objective-C, or the specification of "executable UML", it seems that the industry is ready for a new programming language. There is even an effort to create a language, Grace, designed to make it easier to teach programming. InfoQ attended the COOMP workshop this week, which was part of the SPLASH 2011 conference.

The workshop aimed at understanding why "Languages for modeling and programming are diverging". The organizers of the workshop noted in their introduction:

On one hand developers who would like to apply OO design to obtain a suitable model end up with the challenge of maintaining both model and program artifacts. And, since many modeling languages are at the same level of abstraction as programming languages, there is little benefit to using a separate modeling language.

On the other hand we see that much OO code is written by developers with little appreciation of OO design and development disciplines, leading to complex code that is difficult to understand and maintain as the concepts and phenomena of the application domain are not properly reflected in the code.

During the workshop, Jean Bezivin presented his views on "Why MDE missed the boat?". He argued that if we measure success in terms of adoption by industry and the start of large modeling projects, MDE has reached a standstill. He does not think this is the end of MDE: he showed that even though Object-Orientation can be traced back to SIMULA, invented in the late 1960s, it was not until 1986, with the first OOPSLA conference, that OO reached mainstream interest and adoption, three years after the first OO killer app was produced by Tom Love, when he replaced 10,000 lines of code with 220 lines of Object-Oriented code.

Jean notes several reasons for MDE's missed opportunity:

There has not been any killer application that produces measurable and reproducible evidence that MDE provides at least an order of magnitude improvement over previous solutions

There are too many definitions and camps (MDD, MDSD, MDA, MDE, MDSE, MBD,...)

There is still a lot of confusion about what MDE is, for instance, between Simulation and MDE, two very different branches of software engineering

UML, which is often used as a foundation for modeling, is a loosely defined language, built by industrial consensus, with poor modularity principles; it is too big, too complex, and changes too often

There is a lack of focus: if MDE is the solution, then what is the problem?

Execution vs Precision: precision is not always obtained through execution, and MDE does not always aim at creating executable models

There is a confusion between programming and modeling that emerged in the 90s when visual programming languages appeared

MDE is often perceived as adding complexity: Metamodels are often too large, there are too many, and their relationships and alignment are poorly understood

MDE lacks modularity, which was not solved by the introduction of packages and profiles in UML

MDE lacks portability: there are no serious results on the portability of models in time and space (a model produced 10 years ago may not be exploitable today)

XMI was a failure; it will eventually disappear, creating a maintenance problem for UML

The notion of a platform model was never taken seriously, and the CIM/PIM/PSM distinction was a false good idea

MDE focused too much on the model of code, and not enough on the model of data

MDE focused too much on solution models and not enough on problem models

MDE focused too much on Information System models and not enough on Business Models

MDE focused too much on modeling in the small and not enough in the large

Jean summarized:

The MDE temporary failure is mainly due to

UML, the main problem

Confusing modeling and programming languages

No definition / evaluation of executable modeling

Too many camps

Jean also argued that we learned a lot in the last 20 years:

We went from a "unified method" to a unified language.

MDA clearly established the notion and the importance of models.

MDE is the best solution for the separation and integration of various aspects of information systems with the help of typed graphs

DSLs are ubiquitous

MDE has been applied to many IT fields, not just software engineering: data engineering, systems engineering, business engineering

Jean noted that with the emergence of mobile devices, the number of application end-users is growing rapidly and the number of developers will not be able to match the need for building applications in the coming years.

In his conclusion he argued that between the Semantic Web and renewed compilation techniques, MDE has disappeared. Only traces remain: DSLs (based on grammarware and XMLware technologies) and UML for blueprints and documentation.

During his presentation he also raised an important question: with, for instance, JavaScript, Dart, and GWT, are we starting to build programming languages that are not necessarily languages you program in, but languages that you generate to?

Do you agree with Jean? Where do you see the future of software engineering? Deeply embedded in the new kids on the programming-language block? Or is a generation of model-driven engineering about to emerge and change software engineering as we know it?

I think the question is misleading on its own, and for the audience at large, it hurts more than it helps MDE. "Missing the boat" usually means "a lost one-time opportunity", leading people to think, "nothing to see here, move along".

I do agree that there is a lot of "mind-changing" evangelism work to do regarding the suitability and profitability of MDE as a software engineering approach. MDE's future is still open; only its direction is uncertain.

I also agree that a "killer MDE app" is missing, because one could show developers and managers that MDE works in practice, and not just in theory. However, the developer population is, IMHO, composed of 90% "mere developer mortals" and 10% "high-minded, real software engineers". MDA would theoretically work fine for those 10%, but the other 90% look for simplicity and just want to get the job done.

Being an MDE practitioner and inventor myself (see the ABSE approach), I think model-driven approaches will go through the natural selection process, evolving through survival of the fittest, just like programming languages.

Unlike MDA, ABSE targets the 90%, not trying to be perfect, but just "good enough" to fit everyone's needs. Other approaches from new, fresh minds are surfacing too, so I predict strong movements and evolution in the MDE arena in the coming years.

One remark on the Stephen Mellor article and the 26 years. I think the riddle is solved if we start recognizing MDE as a collection of many technologies, with the hype cycle applying to each of them separately. Add these curves, and you get the 26 years. (Which easily leads to a meta-hype-cycle theory comparable to business cycles, but, well, we had better leave that to others...)

You've got a point here! Perhaps not your original intention, but you made me realize that, from an "MDE outsider's" perspective, this article might not make much sense. Are we MDE practitioners too close-minded? It seems we need to take off our tin foil hats for a while...

As an MDE outsider, my first question is: how do you test a model? If you want to build your software incrementally, and you want to do it in collaboration with your business/client, then you need some working software to facilitate communication. What does a model do except add complication?

Of course, if you have a mountain of requirements and you want them implemented in one big bang, then modeling could help you get off to a flying start. I have never seen modeling used in conjunction with test-first, weekly-showcased working software.

If I define MDE as:

- using formal languages with a graphical or textual concrete syntax at a higher level of abstraction than general-purpose languages like C#, Java, C/C++, or even Cobol;
- transforming these models into code, tests, other models, configurations, documentation, etc.;
- using powerful tools to ensure consistency, automate all the transformations, etc.;

then I say: we have been doing MDE for decades, but we have improved a lot on the tools (and the tools to build tools, like www.eclipse.org/Xtext/).
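The transformation step in this definition can be sketched in a few lines. The following Python snippet is an illustrative toy, not any particular MDE tool: a tiny declarative model (here just a dictionary of entities and typed fields) is turned into source code by a generator.

```python
# Toy model-to-code transformation: a declarative "entity" model
# is turned into Python class source code by a simple generator.
# The model contents are invented for illustration.
MODEL = {
    "Customer": {"name": "str", "email": "str"},
    "Order": {"id": "int", "total": "float"},
}

def generate_class(entity, fields):
    """Generate the source of one class from its model entry."""
    params = ", ".join(f"{f}: {t}" for f, t in fields.items())
    body = "\n".join(f"        self.{f} = {f}" for f in fields)
    return (f"class {entity}:\n"
            f"    def __init__(self, {params}):\n"
            f"{body}\n")

code = "\n".join(generate_class(e, f) for e, f in MODEL.items())
print(code)
```

Real tools obviously add metamodel validation, templates, and incremental regeneration, but the principle is the same: the model is the single source of truth, and the repetitive code is derived from it.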

On the other hand, there are enough drivers that lead to the outlook that MDE *has* to be used in the future.

To handle complexity in embedded systems development, for example, MDE is the only way out. Software in Electronic Control Units (ECUs) for cars, aeroplanes, or rockets is already developed that way every day.

So maybe it is more a question of the complexity level that has to be reached before you get into a dead end without any instruments other than GPLs to handle this complexity.

You assume that models are only used for design or documentation purposes. If you also use them to create software automatically, MDE certainly supports test-first, weekly-showcased working software. We do that in each of our projects. Models then help you work at the right level of abstraction and keep technical details "behind the scenes"; the technical mapping is done in the generation step. Combined with continuous integration and automated testing, this supports iterative and agile approaches very well.
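The point that generated software still fits a test-first workflow can be made concrete with a small hypothetical sketch (the model and the pricing rule below are invented for illustration): the generator output is executed and verified by an ordinary automated check, exactly as a continuous-integration pipeline would do on every model change.

```python
# Hypothetical sketch: generate code from a model, then verify the
# generated artifact with an ordinary automated test, as a CI
# pipeline would on every change to the model.
MODEL = {"name": "add_tax", "rate": 0.2}  # illustrative pricing rule

def generate(model):
    """Emit a function's source code from the model."""
    return (f"def {model['name']}(price):\n"
            f"    return round(price * (1 + {model['rate']}), 2)\n")

source = generate(MODEL)

# The "test first" part: a unit-style check against the generated
# function. The test targets behavior, not the generated text.
ns = {}
exec(source, ns)
assert ns["add_tax"](100.0) == 120.0
print("generated code passes its test")
```

The test exercises the generated behavior rather than the generator's output string, so the model (and generator templates) can keep evolving as long as the behavior stays green.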

Actually, this is why technologies like Xtext are a game changer in MDE: they allow you to keep changing the model (and the underlying metamodel) without too much effort or uncertainty. This is where other MDE tools failed in the past: changing the model, the metadata editor, or the interpreter / code generator was a major undertaking (think developing a UML tool). They tried to deliver an all-encompassing metamodel, which quickly showed its limits and was not extensible enough. Xtext makes it possible for a single person or a very small team of developers to undertake this kind of constant evolution, or "just-in-time modeling" (JIM).

That being said, I would argue that there is an underlying model/metamodel to your code, no matter what. Sure enough, code allows you to "bend" that model/metamodel as needed (or serendipitously, as developers cannot all code to that underlying model/metamodel), but at what price?

I would not be surprised if, as more people discover technologies like Xtext or MPS, they adopt a new coding approach where the model is explicit and the "code" is in the code generator. The model may not be reusable as such for other projects, since ultimately it might contain idioms and dedicated code generation specific to the application being built, but it might prove extremely efficient and maintainable: as the underlying technologies and APIs evolve, the model (and underlying metamodel) itself should remain constant.

I think the question is misleading on its own, and for the audience at large, it hurts more than it helps MDE. "Missing the boat" usually means "a lost one-time opportunity", leading people to think, "nothing to see here, move along".

I agree, but in my understanding, the ever-changing hot spots of Internet-related technology have absorbed too many resources, especially recently: the cloud, mobile applications, etc., with tremendous changes in the infrastructure, environment, and architecture of applications, which attracted most people's attention. I wonder whether Jean Bezivin lamented that "MDE missed the boat" in that sense?