One complaint I hear about models is that they're abstract and therefore not useful. My initial response is "Duh!" to the first but "Huh?" to the second.

Humans cogitate almost exclusively using abstractions, hence the initial response. An abstraction focuses on some properties of an object (not using the term in the OO sense) to the exclusion of others. When we want to reason about some characteristic of a thing - such as its behavior under certain circumstances - we can ignore other aspects as irrelevant to that question, even though those omitted properties may be very relevant to reasoning about other qualities of the object.

Take a chair. If I'm designing the chair, I really need to focus on the parts, how they connect, their structural strength, the stability of the base, and so on. If I'm trying to put the chair together, I need the parts list and the order in which those pieces must be assembled. If I'm an interior designer, I care about the chair's style and color. If I'm trying to arrange seating at an event, I care about the physical space it takes up and its capacity. Which model represents the chair?

The answer, of course, is that they all do. Each focuses on some aspects of the chair and elides detail not relevant to the reasoning required of the model.

This does not mean that models, and the abstractions underlying them, are either imprecise or useless. We can, and should, build models for systems and software that are both precise enough and broad enough to support the necessary reasoning. Sometimes, this entails detailed state-based behavioral modeling; other cases might result in mathematically precise models using languages/tools such as Simulink. In other cases, we'll build detailed architectural structure models to understand what the large-scale pieces of the system are and how they connect.

To be generally helpful in systems and software modeling, the models must represent precise details about the system, but not necessarily all the available detail. For example, I might model a device driver with UML. I precisely specify the structure in a UML class diagram, identifying the set of relevant attributes, their types and subranges, as well as the services that manipulate those attributes. I might also specify the order of execution of the services within a state machine as the driver responds to various environmental events. I can then generate code using tools such as Rational Rhapsody(tm), download it to the target, execute it, and visualize that execution using Rhapsody's execution and animation environment. I might very well have elided details about unrelated structural aspects, such as links to an event logger implemented as an observer, or attributes not related to the device driver's manipulation of the hardware. I certainly have omitted specifying the source-level language constructs managing the state machine execution, and I have certainly not specified which CPU registers should be used. Those details aren't part of the model, but that doesn't mean the model was imprecise with respect to the behavior and structure relevant to my viewpoint.
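To make this concrete, here is a minimal C sketch of the kind of state-machine-driven driver such a model describes. The driver, its states, events, and attributes are all invented for this illustration; actual Rhapsody-generated code is structured differently and is far more complete.

```c
#include <stdint.h>

/* Hypothetical UART driver: states, events, and attributes are
   invented for illustration only. */
typedef enum { DRV_IDLE, DRV_CONFIGURED, DRV_ACTIVE, DRV_ERROR } DrvState;
typedef enum { EV_CONFIGURE, EV_ENABLE, EV_FAULT, EV_RESET } DrvEvent;

typedef struct {
    DrvState state;      /* current state of the state machine        */
    uint32_t baud_rate;  /* attribute with a model-specified subrange */
} UartDriver;

void drv_init(UartDriver *d) {
    d->state = DRV_IDLE;
    d->baud_rate = 0;
}

/* Event dispatch: the state diagram's transition table, rendered
   as a switch over (state, event). */
void drv_dispatch(UartDriver *d, DrvEvent ev) {
    switch (d->state) {
    case DRV_IDLE:
        if (ev == EV_CONFIGURE) { d->baud_rate = 115200; d->state = DRV_CONFIGURED; }
        break;
    case DRV_CONFIGURED:
        if (ev == EV_ENABLE) d->state = DRV_ACTIVE;
        break;
    case DRV_ACTIVE:
        if (ev == EV_FAULT) d->state = DRV_ERROR;
        break;
    case DRV_ERROR:
        if (ev == EV_RESET) drv_init(d);  /* reset returns to IDLE */
        break;
    }
}
```

Note what the sketch deliberately omits - logging links, register-level detail - exactly the elision the model makes, without losing precision about the states and transitions that matter.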

There are people who think that source code is the ultimate answer and modeling should - at most - be a very high level vague notional view, scribbled on a napkin and then discarded. I am not among them. I have seen tremendous benefits in the application of models precise enough to support execution/simulation and code generation.

Over the course of an entire project, 10-20 source lines of code (SLOCs) per programmer per day is the norm for software productivity. I understand that when you're actually sitting in front of your computer wielding vi (or Emacs - my wife and I argue about which is best!) to write code, your productivity is higher. Nevertheless, over the lifespan of an entire project, 10-20 SLOCs/day per programmer is typical. Interestingly, this seems to be independent of the abstraction level of the language. This is why source-level languages smacked assembly language over the head: you can do a lot more with a line of high-level source language than with a single assembly language statement. 10-20 lines of a high-level source language might result in between 100 and 300 lines of assembly language. Properly applied, UML modeling can have a similar magnification of productivity relative to source-level languages. I've seen 200-300 SLOCs/day per programmer for effective modeling teams. It's all about the maturity of the organization with respect to its use of modeling.

I've created a model of modeling maturity for software development teams called the UML Modeling Maturity Index (UMMI), shown below:

The percent benefit is (informally) derived from observations of the productivity of hundreds of teams using UML and/or SysML with varying degrees of success. As expected, the more mature an organization's application of modeling, the more it benefits from modeling.

To do precise modeling, be sure to specify the purpose of the model, and of each diagram. This is the "mission statement" for the diagrams that I talked about in my last blog, "Forget 7 +/- 2". Provide all the detail relevant to that purpose. If the purpose is to develop running software, then you'll need to specify the structural elements (classes with attributes and services for OO designs; functions, variables, and data types for structured designs, along with the structural relations), state-based behavior (with state diagrams), and algorithmic behavior (with activity diagrams or flow charts). This precise modeling of the software structure and behavior takes less effort than writing the equivalent source code, and the source code can be generated from it. Further, you get the benefit of automatic design documentation, because you know the design represents the actual shipping code.
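For a structured design, the mapping from model to code is straightforward. Here is a small C sketch, with invented names and subranges, of how structural elements might translate: attributes become struct fields, and services become functions operating on the instance.

```c
#include <stdint.h>

/* Hypothetical controller element from a structure diagram.
   Names and the 0..1000 subrange are invented for illustration. */
typedef struct {
    uint16_t setpoint;  /* attribute: commanded value, subrange 0..1000 */
    uint16_t measured;  /* attribute: last sensor reading               */
} Controller;

/* Service: clamp and store a new setpoint - the kind of algorithmic
   behavior an activity diagram or flow chart would specify. */
void ctrl_set_setpoint(Controller *c, uint16_t value) {
    c->setpoint = (value > 1000) ? 1000 : value;
}

/* Service: compute the control error from the two attributes. */
int16_t ctrl_error(const Controller *c) {
    return (int16_t)(c->setpoint - c->measured);
}
```

The class diagram pins down the attributes, types, and subranges; the behavioral diagrams pin down the bodies of the services - which is why a generator can produce code like this mechanically.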

One of the key technologies that I preach (I am, after all, Chief Evangelist of IBM Rational) is agile methods. Since I’m focused in the systems space, my concerns are how to apply the principles and practices of agile to embedded software development, systems engineering, and systems-of-systems management. I also have a special focus on the safety and security of these cyber-physical systems, areas of concern I consider crucial for system success.

There are people who claim that agile cannot be applied to such systems because agile means there is no planning and no tracking; to them, “being agile” is just a license to hack away until they are tired, at which point, presumably, they are done. From my point of view, and from that of other agile theorists, these people are clearly misguided.

Scott Ambler talks about “disciplined agile,” and a lot of his focus is on scaling agile methods up from small co-located project teams to entire organizations. In this view, agile is a very disciplined approach to development and requires planning and some rigor to succeed. I strongly believe Scott is right, and it is this rigor that I bring when I apply agile methods to the systems space. A key part of this rigor is monitoring your success and modifying your approach to improve it.

On the other side, we have people who claim that the Harmony/ESW process (see my book “Real-Time Agility” for more) isn’t agile because it doesn’t rigorously follow their favorite agile dogma. I find this argument – that you should dogmatically apply an approach that emphasizes flexibility and uses feedback to change and improve its effectiveness – amusing, to say the least. It’s like “Jumbo Shrimp”, “a little pregnant” or “acute apathy.” An oxymoron is a statement whose inherent assumptions invalidate its own premise.

Agile methods really focus on activities that shorten the “time to value” – such as continuous execution, test-driven development, incremental development, and continuous integration. They require planning as well, but a difference is that Agilistas understand that a plan is a map drawn for a geography that you’ve heard about but not yet visited. Such plans are incorrect in a variety of ways, even if they are mostly OK.

A couple of weeks ago I was in Japan visiting a major electronics company that wants to use agile methods for its consumer products. They were very concerned about how you can plan product development when agile methods proceed without planning; they were under the impression that agile methods mean there can be no plan and that developers just hack away until they think they’re done. They were also concerned about coordinating the efforts of hundreds of engineers to produce a single product using approaches they believed were only valid for small co-located teams. Again, they misunderstood the heart of agility: act where you can add value and do what demonstrably works.

People move to agile methods first and foremost to improve quality, and secondly to improve their productivity. Mostly, this means that you continue to do the activities that you do in a more traditional process, but you do them in a slightly different way and perhaps at a different time. For example, most traditional environments do have a unit testing activity, performed near the end of the development cycle. The Test-Driven Development (TDD) practice changes when you do that activity (at the same time you develop the software, that is, daily) and a bit about how you do it (incrementally rather than all at once). The task is still largely the same, but by rearranging how you perform it, your time to value is greatly shortened.
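A minimal sketch of this rhythm, in C with invented names: the unit test lives alongside the function it exercises and is run continually as the code grows, rather than being deferred to the end of the cycle.

```c
#include <assert.h>

/* Hypothetical example: a saturating add for a sensor driver,
   developed test-first. Names are invented for illustration. */
int sat_add(int a, int b, int max) {
    long long sum = (long long)a + b;  /* widen to avoid overflow */
    return (sum > max) ? max : (int)sum;
}

/* The unit test is written alongside the code and run daily -
   same task as end-of-cycle unit testing, just done earlier
   and incrementally. */
void test_sat_add(void) {
    assert(sat_add(2, 3, 10) == 5);    /* normal case        */
    assert(sat_add(7, 9, 10) == 10);   /* saturates at limit */
}
```

Each small increment of function code is accompanied by an increment of test code, so the unit is never more than minutes away from its last known-good state.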

Nevertheless, when we start working on a project, we have grand ideas about how the project will unfold. If there is one take-home message I’ve gotten from over 35 years of developing systems, it’s that the plan is wrong, in some detail great or small. So we adjust our plans based on success metrics, such as defect density or project velocity, and based on that feedback, we change what we do. Rigorously adhering to a workflow that direct evidence suggests is suboptimal is not only silly, it violates the basic premise of agile: “do what works.” Rigorous Agile is an oxymoron if I’ve ever heard one.