> i respect the work here, so please don't take this quip as being too snarky, but... it kills me that it is as if functional programming never did and still doesn't exist! sheesh?!

What part of DCI strikes you as sounding like functional programming?

> What part of DCI strikes you as sounding like functional programming?

overall, there are several things about DCI that sound like things already said elsewhere, and it is a bit frustrating that the copious # of words were not better used to help give a sense of how DCI relates to things people already know, be it functional programming (the general idea that algorithms matter and should be able to be clearly expressed somehow in the ascii of the source code vs. the shattering that has happened via bog-standard oop), or context-oriented programming.

(basically i think the article was written with some sort of anti-news-style approach, which was frustrating to me. i'd have liked it even more if it did the progressive refinement journalistic style instead.)

The closest thing to writing something like this in Java is using Annotations, right? You can inject some behavior with annotations, but maybe not as elegantly as with a trait in Scala. It was a nice introduction to traits.

> The closest thing to writing something like this in Java is using Annotations, right? You can inject some behavior with annotations but maybe not as elegantly as with a trait in Scala. It was a nice introduction to traits.

The Qi4J folks are doing trait-like things in Java via a framework, so you may want to take a look at that:

I guess we tried to make the point that DCI tries to reproduce the convenience of algorithmic expression that procedural languages used to give us. Trust me, both of us are old enough to have been through them :-) We gave FORTRAN as a concrete example. I myself enjoyed doing a lot of FORTRAN programming in the 1970s. As the article says, DCI brings us the algorithmic expressiveness of FORTRAN combined with many of the good domain modeling notions from 1980s object orientation.
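For readers who want the shape of that in code, here is a minimal sketch of the DCI idea in Python rather than the article's Scala (the class and role names like TransferMoneySource are illustrative, and the role-binding trick shown is just one possible mechanism):

```python
# A minimal sketch of the DCI shape: dumb data objects, a role that
# carries the interaction logic, and a context that binds the role to
# an object for the duration of one use case.

class Account:
    """Dumb domain data: knows its balance, nothing about transfers."""
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount
    def deposit(self, amount):
        self.balance += amount

class TransferMoneySource:
    """Role: the whole transfer algorithm, readable in one place."""
    def transfer_to(self, sink, amount):
        if self.balance < amount:
            raise ValueError("insufficient funds")
        self.withdraw(amount)
        sink.deposit(amount)

class TransferMoneyContext:
    """Context: casts a plain account into its role, then runs the use case."""
    def __init__(self, source, sink):
        # Bind the role to the source object (one of several possible tricks).
        source.__class__ = type(
            "SourceInRole", (TransferMoneySource, source.__class__), {})
        self.source, self.sink = source, sink
    def execute(self, amount):
        self.source.transfer_to(self.sink, amount)

savings, checking = Account(100), Account(10)
TransferMoneyContext(savings, checking).execute(30)
print(savings.balance, checking.balance)  # 70 40
```

The point of the sketch is the separation, not the metaprogramming: the algorithm lives in the role, the data lives in the object, and the context pairs them per use case.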

I have great respect for the authors and I see the value in the style being promoted in DCI. I am struggling with the motivating example.

The authors were clear in demonstrating that the traditional OO approach falls short in matching up to the user's mental model (at least) in that an account ought not be smart enough to be responsible for transactions and the like. But the example of extracting that behavior into a role that is then bound to the source account seems to fall short as well.

Assigning the transfer role to the source account seems arbitrary to me. Why not assign it to the target? Why assign it to either account?

My mental model (admittedly I may be mental) is different from the example. I do have the two accounts and the notion of moving funds from one to the other. I diverge in that my mental model does not imagine the act of fund transfer being a role of either account. My model imagines that role belonging to the bank.

This mismatch keeps getting in the way of my willingness to appreciate DCI.

This is the most pretentious, long-winded, non-scientific article written by academics imaginable.

"Hey, you foolish practitioners, you are ignoring the user."

First off, their real world example is AWFUL and they're discredited on that basis alone. And I quote:

@2. System displays valid accounts

WHAT? Are you seriously going to put a System class in your problem domain analysis classes? That will adversely affect the gross application structure, which is SUPPOSED to directly reflect the problem domain. You are sharing server-side concerns with the client. Why stress your network that much? This example basically mimics many awful ATM controller software examples in various books alleging to teach OOA.

I will stick with Allen Holub's brilliant criticism of MVC: It is fundamentally not OO, because of Controllers. Object-oriented programs should RENDER THEMSELVES. Objects should be able to render themselves. Not being able to meet this constraint is an obvious code smell that you should clean up. This code smell is typified by frameworks like Struts.

This is actually why understanding how real-time OO engineers apply OO can be so helpful to enterprise programmers. Once you start thinking in real-time terms, chatty protocols become obviously unacceptable. The fact they deal with subsequences at the application protocol level is an analysis smell: You're coding low-level details at the highest layer of your architecture, which will make it impossible to stabilize design, forcing recompilation. Today, most MVC approaches use IoC containers and pluggable architectures to try to skip as much recompilation as possible, but mentally they're still recompiling.

My designs are pretty stable, although I refuse to call them MVC, because of the use of Controllers. My objects render themselves - they don't expose implementation details simply so that a delegate can render for them. And you know what? I rarely recompile these days.

The much bigger question is how to build OO systems based on REACTIVE MODELS. Reactivity gives you a direct mental model for any user.

Most programming models support event-driven programming only through inversion of control. Instead of calling blocking operations (e.g. for obtaining user input), a program merely registers its interest to be resumed on certain events (e.g. an event signaling a pressed button, or changed contents of a text field). In the process, event handlers are installed in the execution environment which are called when certain events occur. The program never calls these event handlers itself. Instead, the execution environment dispatches events to the installed handlers. Thus, control over the execution of program logic is "inverted".

[b]Virtually all approaches based on inversion of control suffer from the following two problems[/b]: First, the interactive logic of a program is fragmented across multiple event handlers (or classes, as in the state design pattern [13]). Second, control flow among handlers is expressed implicitly through manipulation of shared state [10].
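A toy sketch (hypothetical names, Python) makes the two problems in that passage concrete: a two-step interaction gets fragmented across handler invocations, and the flow between the steps lives only in shared state:

```python
# The "inverted" style the quoted passage describes: the program never
# calls its handler; a dispatcher does. The two-step conversation (ask
# for a name, then greet) is split across invocations, and the control
# flow between them is implicit in the shared `state` dict.

handlers = {}

def on(event, handler):
    handlers[event] = handler      # the program only registers interest

def dispatch(event, payload):
    handlers[event](payload)       # the environment calls back in

state = {"expecting": "name"}      # implicit control flow via shared state

def handle_input(text):
    if state["expecting"] == "name":
        state["name"] = text
        state["expecting"] = "greeting_ack"
    elif state["expecting"] == "greeting_ack":
        print(f'hello {state["name"]}, got {text}')

on("input", handle_input)

# The environment, not the program, drives execution:
dispatch("input", "Ada")
dispatch("input", "ok")            # prints: hello Ada, got ok
```

Nothing in the handler's text says "first the name, then the greeting"; that ordering exists only in the mutation of `state`, which is exactly the second problem the authors name.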

What they really should have said, though, is that most IoC techniques are flowcharts in disguise. Containers like Seam actually realize this on some level and instead of disguising the flowcharts, design flowcharts into the Seam system top-down, with Subversion of Control and Seam bijection. Combined with declarative state management and tight integration with JSF and EJB 3, Seam code can be written in a conversational style. This is still not ideal, because it doesn't force programmers to do OO analysis.

I feel like I'm writing an article in Harvard Business Review: "Brushing Teeth and Eating Veggies Surprisingly Good For Well-being."

Figuring out how to force programmers to brush their teeth is an idea from an academic worth discussing. Let's start by explaining the bristles need toothpaste and the bristles scrub the teeth.

@overall, there are several things about DCI that sound like things already said elsewhere, and it is a bit frustrating that the copious # of words were not better used to help give a sense of how DCI relates to things people already know, be it functional programming (the general idea that algorithms matter and should be able to be clearly expressed somehow in the ascii of the source code vs. the shattering that has happened via bog-standard oop), or context-oriented programming.

Not only have they been said elsewhere, but they've already been taken further toward their logical conclusions.

For instance, I do a lot of dynamic composition where users can construct their own objects on the fly. Dynamic, run-time composition is essentially what the Open-Closed Principle is all about.

Yet the real trick is *how* you provide dynamic composition. Some of my personal objectives are aimed at eliminating accidental complexity, because I find it eliminates the most lines of code. A really good barometer for complexity is the number of states in the system. For instance, coding something as orthogonal using and-decomposition of states when it is best represented using or-decomposition will significantly affect your program's statespace -- the orthogonal version is a CROSS PRODUCT.
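The arithmetic behind that cross-product point is easy to show; the state names below are made up for illustration:

```python
# And-decomposition multiplies state counts; an or-decomposition that
# rules out impossible combinations keeps the statespace closer to linear.

from itertools import product

connection = ["disconnected", "connecting", "connected"]
auth = ["anonymous", "authenticating", "authenticated", "expired"]

# And-decomposition: every combination is treated as a reachable state.
orthogonal_states = list(product(connection, auth))
print(len(orthogonal_states))  # 12 = 3 * 4

# Or-decomposition: if auth only matters while connected, the real
# statespace is the two non-connected states plus the four auth substates.
nested_states = ["disconnected", "connecting"] + \
    [("connected", a) for a in auth]
print(len(nested_states))      # 6
```

Model the wrong decomposition and every handler has to consider six extra states that cannot actually occur.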

When will academics learn that it is no longer interesting to talk about single groups of functions in isolation, but instead focus on architecture and how they fit together as a whole?

Those who have actually attempted this were bashed by Brooks' No Silver Bullet (Lieberherr and Kiczales [modularity], and especially Harel [complexity]).

@My mental model (admittedly I may be mental) is different from the example. I do have the two accounts and the notion of moving funds from one to the other. I diverge in that my mental model does not imagine the act of fund transfer being a role of either account. My model imagines that role belonging to the bank.

It's not "just you". I would go a step further and say the bank is a very high-level analysis class representing a large subsystem (and may not even be something the domain expert talks about in the problem domain), and that the analysis class for this scenario is a TransferSlip.
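A sketch of what that alternative model looks like (TransferSlip and its method names are this thread's invention, not the article's): the interaction is its own analysis object, and neither account plays a transfer role:

```python
# The interaction as a first-class thing: neither account is "the
# transferrer"; the slip is the locus of the use case.

class Account:
    """Plain data: a balance and nothing else."""
    def __init__(self, balance):
        self.balance = balance

class TransferSlip:
    """The transfer itself is the analysis class for this scenario."""
    def __init__(self, source, target, amount):
        self.source, self.target, self.amount = source, target, amount
    def execute(self):
        if self.source.balance < self.amount:
            raise ValueError("insufficient funds")
        self.source.balance -= self.amount
        self.target.balance += self.amount

a, b = Account(100), Account(0)
TransferSlip(a, b, 25).execute()
print(a.balance, b.balance)  # 75 25
```

Structurally this is close to what DCI calls a context; the disagreement above is really about whether the algorithm should additionally be bound onto one of the accounts as a role.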

OO textbooks do often present the Visitor pattern with a so-called pedagogical example of transactions with double-dispatch. I consider this an anti-pattern, however.

By the way, Jim, don't take my rant personally. However, I'm trying to get you to think clearly.

If you are going to present a solution to something, you actually need to describe:

(1) what the problem is
(2) what contributions you can make
(3) experimental results, including prototypes, field studies and project data

More importantly, practitioners have a hard time relating to abstract concepts when there is no connection to a real world problem they have.

By presenting the strawman that "programmers just don't get MVC, but it works", you're basically saying all we really need to do is use something correctly. At which point, I'm wondering why the rest of the article isn't focused on teaching it correctly.

Instead, the article diverges and becomes a sermon about how we've all overlooked some practically fundamental architectural concepts -- "traits", if you will.

Actually, in my 14 years as a programmer, the biggest benefit I've found in an architecture that includes a GUI is what I call the designer-developer facade. Pretty much every system I've seen others build gets this facade wrong, because once you have to dynamically vary the UI -- for any reason -- you take the design tools out of the designer's hands and force him/her to describe the interaction effects to a developer. This doubles the resources required, and frequently the developer is untrained in how to program complex reactive systems where arbitrary events can occur. As a result, copious bugs pour in. The end result is frequently event handlers with ad-hoc case statements.

There are a few exceptions. For instance, the best-selling video games Crash Bandicoot 1 through 3 were architected by one of the top 5 Lisp programmers in the world: Andy Gavin. After several iterations, the architecture was stratified into two main subsystems: a Lisp language for game engine subsystem programmers and a DSL for artists and scene designers. More recently, two members from that nine-person video game company (including cofounder Jason Rubin) started a dotCOM called Flektor, which embodied many of the same principles.

So it is possible to create a designer-developer facade.

But you fail to tell me how to do this (I know how, but just follow me).

Your traits also give the appearance that you're creating mini-classes to provide ad-hoc case statements. The only difference between your classes and an ad-hoc case statement is that you've got type-checking; you've added a little bit of structure. In fact, Odersky recognizes this by providing a switch statement on steroids that can do structural pattern matching. However, you don't explain why that incremental structure improves code clarity, testability or conceptual design.

You also neglect that a parts-centered object model gives you stronger compile-time type checking for what you are trying to accomplish than a specialization-centered object model, hence why you mentioned trying to avoid traditional class inheritance. However, all you do is provide extension methods with a friendship relationship to a class. That's not an awful thing, and the C++ Client/Attorney idiom can be used to good effect, but it is an implementation detail far apart from the architectural level that MVC addresses. It's also far apart from the level of abstraction of considering a system with an architectural constraint of a *pure* developer-designer facade.

Pure abstractions are much harder to design than pure functions, but they give us the ESSENCE Brooks talks about. Yet, should these abstractions change at the architectural level, they are changing to reflect real world realities. If the roles of designers and developers change, then they will likely change in ways that allow each person at opposite ends of the role the least amount of grief possible. For that reason, you stabilize design around that architectural constraint, not MVC or DCI.

Yes. A Use Case is a contract between an end user and a system under construction. Too many programmers take this beyond analysis into design. The Use Case ends up being pseudo-code that over-constrains the ordering of processing as well as other design choices. A Use Case should focus on "what," not "how."

Are you seriously going to put something other than the system in your problem domain analysis?

By the way, I have no idea what a "problem domain analysis class" is. Can you be more precise?