We have one obsession: stopping people from writing so much code


The value of “programmer tests” is far from being a consensus. I am not even talking about Test-Driven Development here, just the practice of programmers building up a suite of automated tests that validate a piece of software that they are working on.

By the way, I am totally on the programmer test bandwagon, have been since I read the “Test infected” article by Beck and Gamma back in 2001. When I see luminaries in the software development field dissing programmer tests, I am totally baffled.

Let me try an analogy. I wonder if they would also question the value of a good set of test fixtures for a language implementation (be it a compiler, interpreter, virtual machine, what have you).


Anyone who has ever set eyes on the codebase for a compiler or interpreter for any programming language must have noticed a large set of test programs that the compiler development team maintains to ensure the compiler is correct, exploring how the compiler or interpreter handles multiple variations of features in the source language. Often the tests not only look for pass/fail outcomes, but also inspect the output to ensure the expected result was produced. This is not a modern practice – I am going to guess that it has been common practice since compilers have existed (that is more than 60 years now), because it is just too obvious and technologically trivial to have been missed even in the first years of Fortran.

The way I (and many others) see it, any program for any domain is implementing a language. This language is made of concepts that exist in the domain at hand, and defines the valid ways you can deal with those concepts. For instance, for an ecommerce application, you have products, and a shopping cart. You can add products to a shopping cart, you can change the quantity for a specific item or remove it, you can empty the shopping cart, you can proceed to checkout (if your cart is not empty). That is a language, and the application implements that language, just like a compiler, or interpreter.

In other words, there is no fundamental difference between testing compilers and testing business applications.

Automated tests are analogous to test programs that you submit to a compiler or interpreter to test it. You play with some feature and see if it produces the expected results. You try variations that you suspect could be tricky for the language implementation to handle. You look for interactions between features that you suspect may result in some weirdness. You do that for every feature you can think of, from the most important, to the most trivial, and you can only be sure the language implementation is complete when, for every language feature, you have at least one instance of a test input that exposes that feature.

What you don’t do is tell your QA teammates to take your compiler for a spin and see if it works, or ship and hope your users will do it, and wait for bug reports for the many silly mistakes you will for sure have made.

If you are a responsible developer, you just don’t ship code to the poor users if you haven’t yourself submitted the code to a decent battery of test cases that ensure that, at least in the situations you and your team could think of, the program being tested performs as expected.

Also, writing programmer tests is a sign of empathy towards your fellow developers. When you add test coverage for a feature you implemented, you are also providing a safety net for your teammates, who will be making changes to the codebase. With a solid test suite in place, they don’t have to manually verify not only that the feature they are working on is correct, but also that they haven’t broken any other feature the team has ever implemented.

Of course, programmer tests are not a guarantee that there will never be bugs lurking somewhere in the codebase being tested. Even the most used compilers for programming languages have bugs found by their users. You may still want QA in the team. And you will still get bugs reported by users. But QA, and of course, users, finding bugs should be the exception, and limited to non-obvious scenarios. [Of course, when you get a bug reported, you write a test case that reproduces that bug, fix the bug so that test - and all existing tests - now pass, and you just made your test suite a little better.]

We just released a new version of Cloudfier which sports a long-requested feature (in experimental status): reverse engineering a relational database schema as a Cloudfier application. It relies on offline database schema snapshots produced by the SchemaCrawler tool.

Steps

1. Create a new folder (call it, say, ‘import-test’).

2. Select the folder you just created, right-click it, and choose Import > File or Zip Archive, then pick a database snapshot file created using SchemaCrawler on your computer (feel free to download this example snapshot). When asked whether the file should be unzipped, CHOOSE “NO”.

3. Select the folder you just created, right-click it, and choose Open Related > Shell.

4. Type cloudfier init-project .

5. Type cloudfier app-deploy . so the contents of the project are published.

6. Type cloudfier import-schema-crawler . offline.db_.zip to import the SchemaCrawler snapshot as a TextUML model (provide the proper file name if your snapshot file has a different name).

7. If you used the sample snapshot, delete the forlint.tuml file before you take the next step.

Cloudfier is an approach for building business applications, and since role-based access control (RBAC) is such an important thing for any business application, Cloudfier is bound to provide support for modeling what users can do to application objects. Here is our plan.

[There are several proposals by third-parties on how to do security with UML (remember, TextUML is UML), but the OMG itself has not officially adopted any so far. So, since there is no standard and no clear 3rd-party winner, I decided I might as well make up my own approach, tailor-made for the needs of business applications.]

The gist of the idea:

classes, attributes and operations may declare “access constraints”, which are UML constraints specialized in describing how accessible an element is (in practice: classes, attributes and operations);

access constraints are defined for one or more user roles – one element may have multiple constraints, matching different roles;

the constraint specification may be an expression that (typically) will take the currently logged-in user/actor into account in order to decide whether access should be allowed.

Profile changes

TextUML models, by default, can count on a UML profile called mdd_extensions. This profile defines several extensions to the base UML standard, for things such as allowing one to mark a class as a test class, elements as debuggable, activities as closures, blocks as initialization blocks, type casts, attribute derivations etc. The mdd_extensions profile was enhanced to support the idea of role classes and access constraints.

Role classes

A role class is a class that represents a role an actor can play. Here is the definition of the stereotype:

(* A role class is a class that represents the role for a user. *)
stereotype Role extends UML::Class
end;

Access constraints

Access constraints are constraints with the “Access” stereotype applied to them. The stereotype allows setting:

the roles affected – user roles that are covered by this constraint;

the capabilities allowed – what users with one of the roles can do. Values are values of the AccessCapability enumeration.

The constraint specification, which is a boolean value specification (could be just a constant, or a complex expression), determines (in addition to the user roles) whether the constraint applies. See the source that defines those extensions:

For simplicity, I omitted details on the role classes, but they are normal classes otherwise and can have any attributes you may want, may specialize other classes etc.

But the use of the “allow” keyword to declare access constraints should be clear. If it isn’t to you, please provide feedback, here or on the project issue tracker.

Great, when can I use this?

TextUML Toolkit users: you should be able to use this if you update the plug-in in Eclipse now.

Cloudfier users: Cloudfier needs to honor the new access control features when running the model natively, or generating code, and some work is required before that happens, so you will need to wait a little longer to use this feature in your Cloudfier applications.

No matter which tool you use (or even if you don’t use any of them), if you have any opinion on the choices made, your feedback is really quite welcome.

It has been a while since I first meant to have this implemented, but Cloudfier’s Expert4J (which, if you don’t know, is a gap-free code generator for JavaEE) finally got support for generating JPA queries that contain subqueries.

For example, take this query in TextUML, which returns all clients that have at least one invoice that has yet to be submitted to the client:

The main challenge here was around generating the variable names when accessing the query roots. That required beefing up Expert4J with deeper data flow analysis. It was tough, and almost overwhelming (given the short windows I had to work on this between client work), but it is finally done, and I am proud of the result. You can see tests for this and other query-related code generation features in QueryActionGenerator.xtend.

I am no JPA expert, so if you are familiar with JPA and you have feedback on where the generated code could be improved, please let me know (and if you know people that grok JPA, please pass this post on to them). The entire code for the example Time Tracking and Invoicing application is available here.

Starting today, you can visualize your Cloudfier applications using the UML graphical notation, as both class and statechart diagrams. This feature was one of the most requested whenever I present Cloudfier. People get that the textual notation (TextUML) is better for actually building the application, but diagrams are useful for someone new to the application to quickly get the gist of it.

How to show Cloudfier applications as UML diagrams

Since Cloudfier shows the diagram for the deployed version of the application, not what you are editing right now, you need to deploy the application first. In order to deploy, you can use the app-deploy or full-deploy commands. If the application is already deployed, you can use the cloudfier info command to just show the links to diagrams:

cloudfier info cloudfier-examples/car-rental

Then just click one of the links to the class or statechart diagrams. This will open the diagram in a new tab. Here is an example of a diagram URL:

The URL determines what package we are rendering and what elements should be rendered (via query parameters). By default, class diagrams show classes with attributes and operations (live class diagram example) and other classifiers, whereas statechart diagrams show only state machines (live statechart diagram example). But feel free to customize the URL to show exactly what you want to see (you can even mix statechart and class diagram elements in a single diagram). Here are all the supported options so far:

The first two subsystems are now licensed as EPL. The last two are licensed as AGPL.

The TextUML Toolkit and the Kirra API projects, which Cloudfier builds upon, are independent projects and have been open-source (EPL) since the beginning.

Why now?

The main driver behind the decision to open source Cloudfier was our new focus on code generation. We are working hard on building a production-level code generator for JavaEE (E4J), and would really like to encourage others, based on the foundations Cloudfier provides, to start building generators for any other target platforms they are interested in.

Back in March I attended my first EclipseCon. It was great to finally meet so many people from the Eclipse community I had previously only talked to via email/newsgroups/bugzilla/twitter/LinkedIn. Also, it was nice to see again some of the Genologics folks living in the Bay Area, and some of the OTI/IBM Ottawa peeps I worked with when I was an Eclipse committer back in the day.

I also presented a session: “Generating Business Applications from Executable Models”. Feedback was mostly positive, with some good points about the delivery (I rushed things unnecessarily at the end to finish on time, not knowing there was a good gap between sessions). Here are the slides:

This is the last installment in a series started about two weeks ago. If you remember, I set off to build a code generator that could produce 100% of the code for a business application targeting the MEAN (for Mongo, Express, Node.js and Angular) stack. I am writing this a day after the presentation took place.

So, the big day finally arrived. Unfortunately, the presentation was not one of my best. My delivery was not that great and, to make matters worse, it turns out my idea of exposing MDD (with Javascript as the target platform) to a crowd seeking wisdom on Javascript didn’t work very well – people didn’t seem to be interested in modeling and code generation at all. Something to take into account in the future. Anyways, the slides (in Portuguese) appear below this post.

Also, the code generator was not complete (was anyone else surprised?), so I couldn’t show code running for many features, and instead had to show the generated code (which looked almost right, but was not quite there yet). I guess that contributed to a less interesting presentation.

On the bright side, a lot of progress on the code generator was made. Take a look at the latest state of the generated code. I am quite proud of what is there now. But there is still more progress to be made before all the sample applications I have translate to feature-complete, correct MEAN applications.

This is another installment in a short series started about 10 days ago. If you remember, I am building a code generator that can produce 100% of the code for a business application targeting the MEAN (for Mongo, Express, Node.js and Angular) stack.

What happened since the last update

Since the previous post, a lot of progress has been made. Much of the code for a domain model class is being generated (including state machine support), as you can see below (bear in mind there are still quite a few gaps – but I still have 3 days!):

Xtend is still making the difference

What an awesome language for writing code generators Xtend is. See, for example, how simple the code to implement state machine support in Javascript is, including guard conditions on transitions and automatic triggering of on-entry and on-exit behavior:

Next steps

Sorry if I am being a bit too terse. Will be glad to reply to any questions or comments, but time is running out fast and there is lots to be done still. For the next update, I hope to have Express route generation for the REST API and hopefully will have started on test suite generation. Let’s see how that goes!

This is another installment in a short series started last week. If you remember, we are building a code generator that can produce 100% of the code for a business application targeting the MEAN (for Mongo, Express, Node.js and Angular) stack. This is what happened since the previous post:

I decided to go ahead with UML (TextUML) as the modeling language, over building a dedicated (more business-oriented) modeling language with Xtext, for time reasons only.

I am still using the Xtend language for writing the generator, and what a difference it makes. As I apply Xtend to various code generation use cases, it becomes evident that Xtend was designed as a language for writing code generators. I doubt there is any language out there that can beat it at traversing and making sense of input models, and rendering textual output.

Progress: generating Mongoose domain schema/models

Since the last update, I made some progress on generating Mongoose schema and models.

The generator basically traverses the UML activity that defines the behavior of the operation at hand and translates UML actions to the corresponding Javascript fragments. Note that the generator is quite naive at mapping from UML to Javascript code (1:1 right now). It has also only been tested against the test case above. Expect that to improve in the next updates.

Code generation tip: use local variables to preserve intent in Xtend templates

First code generation tip I have is a simple one: use local variables to preserve intent in your Xtend templates via sensible naming. This is a fragment from the Mongoose code generator:

Notice how the schemaVar, modelVar and modelName variables (which represent Javascript local variables and expressions) help make the template easier to make sense of. I am not arguing that a template should only contain local variable references; I actually prefer leaving delegation to other generation methods inside the template, instead of pre-invoking them and storing the results in local variables, as that keeps the structure of the template easier to understand. Also, note that even if two things happen to look the same, they may not have the same meaning, and as such may be better off as different variables (case in point: modelVar and modelName).

This is it for today – for the next update I hope to have query generation covered.

In two weeks’ time, I will be presenting at the Javascript track of The Developers’ Conference (or TDC), a regional developer-centric conference. The presentation is titled: “Developing Business Applications on the MEAN Stack Using Model-Driven Development”.

The goal is to show how, from a model that defines the entities, attributes, relationships, states, queries, and actions for some business domain, you can generate a complete (i.e. including behavior) Javascript application that runs on the MEAN stack (which stands for MongoDB, Express, AngularJS, and Node.js). One benefit is that you get to use the shiny new toy of the day and yet remain able to move on to different stacks in the future if (read: ‘when’) needed. Another is that you get clear separation between business and technology, whose virtues I extolled in the past.

The presentation is on October 16. That leaves me with nearly two weeks to prepare. TDC presentations are mostly around showing code, preferably running live, and that code generator won’t write itself, so today I am setting off to build it.

Current status – T-14 days

This is what I am starting from:

I have built a few code generators before, some that do 100% code generation

I have played with all the elements of the MEAN stack, but I am not an expert in any of them (nor should I need to be – just need to find some good examples to borrow from)

I have settled on using Xtend for the code generator (blows everything I used before out of the water, including Velocity, XSLT(!), StringTemplate and Groovy). I have a skeleton started here.

I am still deciding between two different approaches for modeling the application: using UML (via TextUML) or a new (mostly general purpose) language I have been working on (some progress here). Need to make up my mind soon.

Ok. So I am off to building this thing. Wish me luck, and if you feel like chiming in on MDD/Xtext/Xtend/UML or Mongo/Express/Angular/Javascript, please do. I will be posting on this blog with updates for the next two weeks, including a report after the presentation, which I hope will be a happy ending. I also intend to work in the open so you will be able to see my progress if you follow my activity on github.

What are the best language and frameworks for building business applications? And why?

The poll was promoted on Twitter, Google+, HackerNews, Reddit, DZone, a couple of language-agnostic forums on LinkedIn, and of course this blog (which is picked up by the Planet Eclipse aggregator).

There were 46 responses, which is not huge but is a decent sample size. I am no statistician, and I am sure the research has many flaws. But I think the data is still interesting.

Languages: Java against the rest of the world

I know Java is a pretty popular language, and even more so for business applications. But I did not expect to see the following picture:

Java got almost half of the votes (20 of 46). Five languages tied in second place, with only 3 votes each: C#, Ruby, Scala, Python, and of all things, C++.

C++ would seem a tad too low-level for developing business applications, so it was a bit of a surprise. However, it is one of the oldest of the languages mentioned, it has a mature library/framework ecosystem, and it was the most popular language before Java became mainstream (considering only those mentioned here, which does not include C), so I guess that explains it.

Another notable aspect is diversity: 12 languages split the 26 votes that didn’t go to Java, with each language getting between 1 and 3 votes (3 being enough for a second-place tie). This level of diversity was something I expected and actually hoped for. I tried to emphasize that this survey was about understanding what developers personally like in languages and frameworks – as if it was only up to them to choose, with no previous experience expected, and even if they had no hope of ever being able to use those languages in their jobs. But given the number of responses for Java with Spring (see below), which has been the status quo for 5-10 years, I think I could have done a better job at that.

Frameworks

The table below shows all framework responses, lumped together by language:

For languages other than Java, this is not statistically relevant, but I hope it gives interesting pointers for people looking to play with the less mainstream languages. Also, I am not sure what RoR is doing there on the Xtend line, but hey.

Since Java got the majority of the votes by a landslide, an analysis of the numbers for Java frameworks exclusively is warranted.

Spring’s dominance is impressive, but not totally unexpected. Spring is more of an umbrella brand, covering frameworks for all sorts of aspects of an application, so it is very likely that any Java developer building a business app is using at least one of the Spring frameworks.

Other than Java EE (which, if lumped with EJB, would have 4 votes), the remaining frameworks got one mention each. Again, definitely not statistically relevant, but if you are a Java developer, maybe take a look at the ones you had not heard of before.

What drives developers?

The chart below shows the most common reasons respondents provided for their choice of favorite language/framework:

Responses were broken into Java (20 respondents, in blue) and non-Java (26 respondents, in green) – just in case it would give some insight into how Java and non-Java developers may value things differently (I took an interest in the “Community” and “Maturity” items). The overall averages would fall in between those bars. Finally, note that Clojure, Groovy and Scala were counted as part of the non-Java camp, even though they are JVM languages.

The following options appeared only once in the responses: “Robustness”, “Scalability”, “Want an excuse to learn it”, “Cloud deployment”, “Integration”.

I expected “Want an excuse to learn it” to be more popular – given how this survey tried to get people to take chances. Most of the motivations chosen seem to belong to the “professional software engineer” mindset, as opposed to the “passionate programmer” viewpoint I was hoping to encourage.

Note that except for “Cloud deployment”, “Ease of deployment” and “Integration” (the last two originally phrased in a more language-specific way), all motivations above were suggested options that respondents could simply check. In retrospect, one major omission among the suggestions was portability. I would guess many would have picked it, as most of the languages chosen are historically available on multiple platforms.

Motivations per language

Finally, I will leave you with one last chart. This one gives you an overview of what things are appreciated for each language (and its frameworks):

There is little significance in this data though. For languages other than Java, the sample was too small.

What are your conclusions?

You are encouraged to draw your own conclusions and post them in the comments section. I had fun doing this little exercise, and am thankful to all those who took the time to respond.

For me, my personal conclusion is that there are quite a few languages people have been using or looking forward to using. But it seems that Java will continue to be the safe choice for developing business applications for a long time. It would be interesting to see what a similar survey would show 5 years from now.

Finally, if you want to try to dig something else up, the raw responses (minus identifying info), consolidations and charts are available in a Google Sheet.

What do you do when you have a conference paper rejected? That has just happened to me. I could work on improving it according to some of the feedback I got, and resubmit it to another conference, but this paper was written for a “Tools Track” of a software engineering conference, and I would have a hard time trying to fit it elsewhere (at least here in Brazil, not planning to travel abroad at this time).

So if you want to take a look at the full paper, the PDF is freely available (download). It is basically an introduction to Cloudfier, what it is meant for, and a tour over the modeling capabilities. Comments are very welcome. Below is the abstract:

Information management applications have always taken up a significant portion of the software market. The primary value in this kind of software is in how well it encodes the knowledge of the business domain and helps streamline and automate business activities. However, much of the effort in developing such applications is wasted dealing with technological aspects, which in the grand scheme of things, are of little relevance.

Cloudfier is a model-driven platform for development and deployment of information management applications that allows developers to focus on understanding the business domain and building a conceptual solution, and later apply an architecture automatically to produce a running application. The benefit is that applications become easier to develop and maintain, and inherently future-proof.

Imagine you were building the back-end for a brand new business application. You would need to address the domain information model (with entities, properties, associations, operations, queries, events etc), its persistence (on a relational database), and a REST API (to support integration with a UI and other clients). Consider the UI is somebody else’s job – and it is going to be built separately using other language/frameworks.

If it were completely up to you (not your client, boss, or co-workers), what would be your language (one) and frameworks (any required) of choice for developing such an application? Why?

These are literally the 3 questions asked in a poll I recently started. Please help by answering the poll and sharing this post (or upvoting it on the site you came from). Results will be published here.

The poll has been running for a few days, and while it is still early, there has already been a quite diverse set of responses.

What is this guy up to?

I figured someone would ask.

It is always fascinating to me to read research that shows what makes developers tick. But I have a specific motivation for finding out what language/framework characteristics are more attractive to developers: to figure out what would be a good target for generating code (MDD-style) from high-level models. The hypothesis is that the more popular (or desired) the target platform, the more interest a code generator for that platform will draw.

What do NakedObjects, Apache Isis, Restful Objects, OpenXava, JMatter, Tynamo, Roma, and Cloudfier have in common?

These frameworks and platforms allow developers to focus on expressing all they know about a business domain in the form of a rich domain model. And they all support or enable automatically generating interfaces based on the application’s domain model that make the entire functionality of the application accessible to end users, without requiring any effort on designing a user interface. They can also often auto-generate a functional (usually REST) API for non-human actors.

However, each of those frameworks/platforms implements automatic UI or API generation independently, against its own proprietary metamodel – for each UI and API technology supported. So while Cloudfier supports a Qooxdoo client, Isis supports a Wicket viewer and a JQuery viewer, OpenXava seems to have a JQuery/DWR UI, and so on.

This is the motivation for Kirra: Kirra aims to decouple the interface renderers from the technologies used for creating domain-driven applications, promoting the proliferation of high-quality generic UI and API renderers that can be used across domain-driven development frameworks, or even if your application is not built with a domain-driven framework.

But what is Kirra?

Kirra is a minimalistic language-independent API specification to expose functionality of a business application in a business and technology agnostic way.

Essentially, Kirra provides a simple model for exposing metadata and data for business applications, no matter how they were implemented, enabling generic clients that have full access to the functionality exposed by those applications.

Watch this space for more details and the first release, planned for later this month.

Ever heard of Command Query Separation? It was introduced by Bertrand Meyer and implemented in Eiffel. But I will let Martin Fowler explain:

The term ‘command query separation’ was coined by Bertrand Meyer in his book “Object Oriented Software Construction” – a book that is one of the most influential OO books during the early days of OO. [...]

The fundamental idea is that we should divide an object’s methods into two sharply separated categories:

Queries: Return a result and do not change the observable state of the system (are free of side effects).

Commands: Change the state of a system but do not return a value.

Query operations in UML

UML too allows an operation to be marked as a query. The section on Operations in the UML specification states:

If the isQuery property is true, an invocation of the Operation shall not modify the state of the instance or any other element in the model.

Query operations in TextUML

The next release of TextUML (which runs in Cloudfier today) will start exposing the ability to mark an operation as a query operation. Being just a notation for UML, the same definition of the UML spec applies to TextUML operations marked as queries.

But how do you mark an operation as a query in TextUML, you ask? You use the query keyword instead of the usual operation keyword (it is not just a modifier, it is a replacement for the usual keyword):

query totalExpenses(toSum : Expense[*]) : Double;

The TextUML compiler imposes a few rules when it sees a query operation:

it will require the operation to have a return value

it won’t let the operation perform any actions that could have side effects, such as creating or destroying objects, writing properties, linking objects, or invoking any other non-query operations

also, it will only let you invoke operations from a property derivation if they are query operations

But why is Command Query Separation a good thing?

Allowing a modeler/programmer to explicitly state whether an operation has side effects lets a compiler or runtime take advantage of the guaranteed lack of side effects to do things such as reordering invocations, caching results, or safely reissuing invocations in case of failure, which can improve performance and reliability.

An upcoming feature in Cloudfier is the automatic generation of fully functional user interfaces that work well on both desktop:

and mobile browsers:

This is just a first stab, but it is already available to any Cloudfier app (like this one, try logging in as user: test@abstratt.com, password: Test1234). Right now the mobile UI is read-only, and does not yet expose actions and relationships as the desktop-oriented web UI does. Watch this space for new developments on that.

The case against generated UIs

Cloudfier has always had support for automatic UI generation for desktop browsers (RIA). However, the generated UI had always been intended as a temporary artifact, to be used only when gathering initial feedback from users and while a handcrafted UI (that accesses the back-end functionality via the automatically generated REST API) is being developed (or in the long term, as a power-user UI). The reason is that automatically generated user-interfaces tend to suck, because they don’t recognize that not all entities/actions/properties have the same importance, and that their importance varies between user roles.

Don’t get me wrong, we strongly believe in the model-driven approach to build fully functional applications from a high-level description of the solution (executable domain models). While we think that is the most sane way of building an application’s database, business and API layers (and that those make up a major portion of the application functionality and development costs), we recognize user interfaces must follow constraints that are not properly represented in a domain model of an application: not all use cases have the same weight, and there is often benefit in adopting metaphors that closely mimic the real world (for example, an audio player application should mimic standard controls from physical audio players).

If model-driven development is to be used for generating user interfaces, the most appropriate approach for generating the implementation of such interfaces (and the interfaces only) would be to craft UI-oriented models using a UI modeling language, such as IFML (though I have never tried it). But even if you don’t use a UI-oriented modeling tool, and you build the UI (and the UI only) using traditional construction tools (these days that would be Javascript and HTML/CSS) that connect to a back-end that is fully generated from executable domain models (like Cloudfier supports), you are still much, much better off than building and maintaining the whole thing the traditional way.

Enter mobile UIs

That being said, UIs on mobile devices are usually much simpler than corresponding desktop-based UIs because of the interaction, navigation and dimension constraints imposed by mobile devices, resulting in a UI that shows one application ‘screen’ at a time, with hierarchical navigation. So here is a hypothesis:

Hypothesis: Mobile UIs for line-of-business applications are inherently so much simpler than the corresponding desktop-based UIs, that it is conceivable that generated UIs for mobile devices may provide usability that is similar to manually crafted UIs for said devices.

What do you think? Do you agree that is a quest worth pursuing (and with some likelihood of being proven right)? Or is the answer somehow obvious to you already? Regardless, if you are interested or experienced in user experience and/or model-driven development, please chime in.

Meanwhile, we are setting off to test that hypothesis by building full support for automatically generated mobile UIs for Cloudfier applications. Future posts here will show the progress made as new features (such as actions, relationships and editing) are implemented.

which is a command contribution without a behavior. All the subcommands you see being offered actually include the prefix in their contributions.

Typical Cloudfier command

The typical Cloudfier command takes a workspace location (a file-type parameter), performs a remote operation and returns a message to the user explaining the outcome of the command (return type is String), and looks somewhat like this:
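A hedged sketch of such a registration (the command name, the callback body, the parameter declaration, and the stub provider are all illustrative, and a plain Promise stands in here for the dojo.Deferred the real command returns):

```javascript
// Stub stand-in for the object the Orion plugin API supplies; the real
// "provider" registers services with the Orion client.
const registrations = [];
const provider = {
  registerServiceProvider(type, impl, properties) {
    registrations.push({ type, impl, properties });
  }
};

// Illustrative callback: the real command issues HTTP requests against
// the Cloudfier server and returns a dojo.Deferred; a resolved Promise
// models the same contract here.
function shellFullDeploy(args) {
  return Promise.resolve("Application deployed.");
}

provider.registerServiceProvider("orion.shell.command",
  { callback: shellFullDeploy },
  {
    name: "cloudfier full-deploy",   // illustrative command name
    description: "Deploys the application in the current directory",
    // Illustrative parameter declaration (a workspace location); see the
    // Orion documentation for the exact parameter shape.
    parameters: [{ name: "application", type: "file" }],
    returnType: "String"             // the outcome message shown to the user
  });
```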

The behavior of the command is specified by the callback function. In this specific case, the callback performs a couple of HTTP requests against the server, so it returns a dojo.Deferred, which implements the Promise pattern contract used by Orion. Once the last server request completes, it returns a string to be presented to the user with the outcome of the operation.

Note that the output of a command needs to use Markdown-style notation to produce links; HTML output is not supported. Also, newlines are honored.

Commands that contribute content to the workspace

Commands that contribute content to the workspace use a “file” (single file) or “[file]” (multiple files) return type. Cloudfier has a few commands in this style:

An init-project command, which marks the current directory as a project directory:
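A hypothetical sketch of that registration, modeled on the db-snapshot registration shown further below; the callback’s return shape ({ path, content }) and the stub provider are assumptions, not the actual Cloudfier code:

```javascript
// Stub stand-in for the Orion plugin API's provider object.
const provider = {
  registerServiceProvider(type, impl, properties) {
    this.last = { type, impl, properties };  // real Orion registers the service
  }
};

// Illustrative callback: contributes an empty mdd.properties marker file
// to the current directory (return shape is an assumption).
function shellInitProject(args) {
  return { path: "mdd.properties", content: "" };
}

provider.registerServiceProvider("orion.shell.command",
  { callback: shellInitProject },
  {
    name: "cloudfier init-project",
    description: "Marks the current directory as a Cloudfier project directory",
    returnType: "file"   // single-file content contribution
  });
```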

And finally, a db-snapshot command, which grabs a snapshot of the current application's database state and feeds it into a data.json file in the current application directory:

provider.registerServiceProvider("orion.shell.command", { callback: shellDBSnapshot }, {
    name: "cloudfier db-snapshot",
    description: "Fetches a snapshot of the current application's database and stores it as a data.json file in the current directory",
    returnType: "file"
});

That snapshot can be further edited and later pushed into the application database.

Note that for all file-generating commands, if files already exist (mdd.properties, <entity-name>.tuml, and data.json, respectively), they will be silently overwritten (bug 421349).

Readers beware

This ends our tour of how Cloudfier uses Orion extension points. Keep in mind this is not documentation.
See this wiki page for the most up-to-date documentation on the orion.shell.command extension point and this blog post by the Orion team for some interesting shell command examples.

Cloudfier now runs on Orion 4.0 RC2. It took some learning, patience, and a few false starts (I tried the same in the Orion 2.0 and 3.0 cycles), but I finally managed to port the Cloudfier Orion plug-in from version 1.0 RC2 (shipped one year ago) to 4.0 RC2. Hopefully when 4.0 final is released (any time now?), integrating with it should be a no-brainer. Only then will I look into hacking/branding it a bit so it doesn’t look identical to a vanilla Orion instance.

But how does Cloudfier extend the Orion base feature set? This post will cover the editor-based features.

Content type

Cloudfier editor-based features are applicable to TextUML files only. This content type definition provides the reference for all features to be configured against.
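As an illustration, a content type contribution for TextUML via the orion.core.contenttype extension point might look like the following (the id and extension values are assumptions, not the actual Cloudfier registration):

```javascript
// Hypothetical content type declaration for TextUML files; in a real
// plugin this object would be passed as the properties argument of
// provider.registerServiceProvider("orion.core.contenttype", {}, ...).
const contentTypeRegistration = {
  contentTypes: [{
    id: "text/x-textuml",      // illustrative content type id
    name: "TextUML",
    extension: ["tuml"],       // file extensions this content type covers
    extends: "text/plain"      // inherits behavior of plain text
  }]
};
```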

Note the outliner API changed in 4.0, and the editor buffer contents are now available via a deferred instead of directly. Also, note that in order to use this API your plugin needs to load Deferred.js (see this orion-dev thread), as it implicitly turns your service into a long-running operation.

Source validation

Source validation is also a server-side functionality, and it already returns a JSON tree in the format expected by the orion.edit.validator extension point.
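For illustration, a validation result in that format might look like the following (the specific problem shown is made up):

```javascript
// Hypothetical example of the JSON shape an orion.edit.validator
// implementation resolves with: a list of problems, each with a message,
// a location, and a severity.
const validationResult = {
  problems: [{
    description: "Unresolved symbol: Expense",  // illustrative message
    line: 12,            // 1-based line the problem is on
    start: 5,            // column range within that line
    end: 12,
    severity: "error"    // or "warning"
  }]
};
```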

Note that the validation service uses a GET method, and only uses the file path, not the contents. The reason is that it validates the project contents stored on the server rather than the contents held in the client, in order to perform multi-file validation.