Saturday, November 07, 2009

In my previous post I presented the DataBinding[T] type (T => NodeSeq => NodeSeq) and illustrated how classes inhabiting this type could be used along with an implicit binder function to create modular, reusable rendering logic for objects. In this post, I will discuss how to assemble these components and strategies for handling rendering over inheritance hierarchies.

This DataBinding[Order] is a trait, not an object like the previous binding instances. In addition, the other DataBinding instances that it delegates to are abstract implicit vals - meaning that I will need to create a concrete implementation of this trait to actually use it. The interesting thing here is that these vals are declared implicit within the scope of the trait, but do not need to be declared implicit when the trait is extended and made concrete. I can then compose what I want my binding to look like at the use site (say an ordinary snippet) like this:
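A minimal sketch of the shape of that trait and its use-site composition - with a plain String standing in for Lift's NodeSeq, %token% replacement standing in for BindHelpers.bind, and a two-field model invented for illustration:

```scala
import scala.language.implicitConversions

// Stand-ins so the sketch is self-contained: in the real code NodeSeq is
// scala.xml.NodeSeq and the bindings use Lift's BindHelpers.bind.
type NodeSeq = String
trait DataBinding[T] extends (T => NodeSeq => NodeSeq)

case class Address(city: String)
case class Order(id: Int, shipTo: Address)

object MailingAddressBinding extends DataBinding[Address] {
  def apply(a: Address): NodeSeq => NodeSeq = _.replace("%city%", a.city)
}
object AdminAddressBinding extends DataBinding[Address] {
  def apply(a: Address): NodeSeq => NodeSeq = _.replace("%city%", a.city.toUpperCase)
}

// The DataBinding[Order] trait: its delegate binding is an abstract implicit val.
trait OrderBinding extends DataBinding[Order] {
  implicit val addressBinding: DataBinding[Address]
  def apply(order: Order): NodeSeq => NodeSeq = template =>
    addressBinding(order.shipTo)(template.replace("%id%", order.id.toString))
}

// Use-site composition: pick the delegate bindings declaratively. Note the
// override needs no implicit keyword.
implicit object ShowOrderBinding extends OrderBinding {
  val addressBinding: DataBinding[Address] = MailingAddressBinding
}

// The implicit binder conversion from the previous post, restated.
class Binder[T](value: T)(implicit b: DataBinding[T]) {
  def bind(template: NodeSeq): NodeSeq = b(value)(template)
}
implicit def binder[T](value: T)(implicit b: DataBinding[T]): Binder[T] = new Binder(value)

val rendered = Order(1, Address("Denver")).bind("order %id% ships to %city%")
```

Swapping AdminAddressBinding in for MailingAddressBinding inside ShowOrderBinding is the entire change needed to alter how addresses render.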

The ShowOrderBinding selects the bindings I want for the components of the order in a declarative fashion, and since it is an implicit object in scope, it is selected as the DataBinding instance to render the order when order.bind is called. If I decide that I want to use the AdminAddressBinding from the previous post instead, I simply substitute it for MailingAddressBinding at the use site and I'm done!

There's one more thing that needs to be done in the above code - to implement TransactionBinding. This is a little tricky because there are several subclasses of Transaction that may appear in the list of transactions for an order, so I'm going to need to recover type information about those instances at runtime in order to choose the correct binding for a given Transaction. Furthermore, there's some information in the base Transaction type whose bindings I don't want to repeat in each subtype binding, so I'm going to use function composition to "add together" two different bindings.

There are two different approaches I'm going to show for how one might go about doing this: first, I'll try using a straightforward Scala match statement to figure out the object's type. That looks like this:
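A sketch of the match-based approach, under the same assumptions as before (String in place of NodeSeq, and Transaction subtypes invented for illustration):

```scala
// String stands in for Lift's NodeSeq so this sketch is self-contained.
type NodeSeq = String
trait DataBinding[T] extends (T => NodeSeq => NodeSeq)

sealed trait Transaction { def id: Int }
case class ShipmentTransaction(id: Int, carrier: String) extends Transaction
case class PaymentTransaction(id: Int, amount: BigDecimal) extends Transaction

object ShipmentBinding extends DataBinding[ShipmentTransaction] {
  def apply(s: ShipmentTransaction): NodeSeq => NodeSeq = _.replace("%carrier%", s.carrier)
}
object PaymentBinding extends DataBinding[PaymentTransaction] {
  def apply(p: PaymentTransaction): NodeSeq => NodeSeq = _.replace("%amount%", p.amount.toString)
}

object TransactionBinding extends DataBinding[Transaction] {
  // Binding for the elements common to every Transaction...
  private def base(t: Transaction): NodeSeq => NodeSeq = _.replace("%id%", t.id.toString)

  // ...composed with the subtype binding recovered by the pattern match.
  def apply(t: Transaction): NodeSeq => NodeSeq = base(t) andThen {
    t match {
      case s: ShipmentTransaction => ShipmentBinding(s)
      case p: PaymentTransaction  => PaymentBinding(p)
    }
  }
}

val out = TransactionBinding(ShipmentTransaction(7, "UPS"))("txn %id% via %carrier%")
```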

Here, I've created a base binding that defines some simple behavior for how to bind elements that are common to all transactions. I then compose that generated binding with a binding for the appropriate subclass. The determination of the subclass binding is of necessity a bit boilerplate-laden, but that's what happens when one throws away compile-time type information. The alternative, of course, would be to store each type of transaction in a separate collection associated with the order, but that's not the way I've gone for now.

There is, however, a hidden bug that may lurk in the pattern match above, depending upon your environment. Let's suppose for a minute that our model classes are persisted by Hibernate - and that the list of Transactions is annotated such that it is lazily, instead of eagerly, fetched from the database. In this case, the transactions returned may be translucently wrapped in runtime proxies. I say translucently instead of transparently because there is one critical respect in which a proxy does not behave in the same manner as the class being proxied - how it responds to the instanceof check that underlies the Scala pattern match. In this case, we will need to use a Visitor to correctly get the type for which we want to delegate binding. To do this, we'll modify our transaction classes like so:
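A sketch of the Visitor variant: TransactionFunc follows the name used below, but its shape, the subtypes, and the String stand-in for NodeSeq are all illustrative.

```scala
type NodeSeq = String  // stand-in for Lift's NodeSeq

// One overload per concrete transaction type.
trait TransactionFunc[A] {
  def apply(s: ShipmentTransaction): A
  def apply(p: PaymentTransaction): A
}

sealed trait Transaction {
  def id: Int
  // Each concrete class dispatches to the statically chosen overload, so even
  // a runtime proxy (which still overrides accept) routes to the right case.
  def accept[A](f: TransactionFunc[A]): A
}
class ShipmentTransaction(val id: Int, val carrier: String) extends Transaction {
  def accept[A](f: TransactionFunc[A]): A = f(this)
}
class PaymentTransaction(val id: Int, val amount: BigDecimal) extends Transaction {
  def accept[A](f: TransactionFunc[A]): A = f(this)
}

// The subtype-specific half of the binding, selected by the visitor.
object SubtypeBinding extends TransactionFunc[NodeSeq => NodeSeq] {
  def apply(s: ShipmentTransaction): NodeSeq => NodeSeq = _.replace("%carrier%", s.carrier)
  def apply(p: PaymentTransaction): NodeSeq => NodeSeq = _.replace("%amount%", p.amount.toString)
}

def bindTransaction(t: Transaction): NodeSeq => NodeSeq = {
  val base: NodeSeq => NodeSeq = _.replace("%id%", t.id.toString)
  base andThen t.accept(SubtypeBinding)
}

val visited = bindTransaction(new ShipmentTransaction(7, "UPS"))("txn %id% via %carrier%")
```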

Since the correct overloading of TransactionFunc.apply will be statically determined, there is no possibility of a proxy getting in the way of our binding.

So, now we have a set of tools for binding composition. Here are some additional ideas of where this might be able to go, given the appropriate changes to Lift.

In my example of the showOrder snippet above, the implementation is trivial... so trivial, in fact, that it almost doesn't need to be there. If the notion of a binding was fully integrated into Lift, we could make Loc somewhat more powerful. Loc is one of the primary classes in Lift used to generate the SiteMap, which declares what paths are accessible within a Lift application. A path in a Loc typically corresponds to the path to an XML template that will specify the snippets and markup for a request. An important thing to note though is that Loc is not simply Loc - it is in fact Loc[T], where by default T is NullLocParams, a meaningless object essentially equivalent to Unit.

When one looks at making T something other than Unit, Loc starts to become significantly more interesting - first, it declares a rewrite method that can be overridden to populate the Loc with an instance of T as part of its processing, and secondly, Loc can specify explicitly the template to be used to render the Loc, instead of using the one that corresponds to the path. Wait... we have an instance, and we have a template... what if we could specify a DataBinding[T] as well? In this case, we might not need a snippet at all!

Tuesday, September 29, 2009

I've been using Lift as the framework behind a small internal web application at my company for the past year or so. The experience has been generally positive, despite a few quibbles I have with how state is typically maintained and passed around within an application.

Lift takes a "view first" approach to building web applications, as opposed to the commonly favored MVC pattern. In practice, "view first" boils down to essentially this: when you create an XHTML template for a page, that template will contain one or more XML tags where the name of the tag refers to a method to be called with the body of the tag as an argument. Such a method will return the XHTML to be substituted in place of the original tag and its contents. In Lift parlance such a method is called a "snippet," and has the type NodeSeq => NodeSeq. In order to more fully understand the rest of this article, I recommend you read more about snippets here if you're not already familiar with them.

The interesting thing about this signature is that functions with this type can easily be composed. What's more, the method most frequently used to implement snippets, BindHelpers.bind, can be partially applied to its arguments to give a closure with the same signature.
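Both properties can be sketched - here with a plain String standing in for NodeSeq, and a toy token-replacing bind whose signature only loosely mimics BindHelpers.bind:

```scala
type NodeSeq = String  // stand-in for scala.xml.NodeSeq

// A toy bind in the general shape of Lift's: given a prefix and some
// substitutions, transform a template.
def bind(prefix: String, template: NodeSeq, substitutions: (String, String)*): NodeSeq =
  substitutions.foldLeft(template) { case (t, (name, value)) =>
    t.replace("<" + prefix + ":" + name + "/>", value)
  }

// Fixing everything but the template yields a NodeSeq => NodeSeq closure...
val bindUser: NodeSeq => NodeSeq = template => bind("user", template, "name" -> "Alice")
val bindStatus: NodeSeq => NodeSeq = template => bind("status", template, "level" -> "admin")

// ...and such closures compose.
val snippet: NodeSeq => NodeSeq = bindUser andThen bindStatus
val page = snippet("<user:name/> is <status:level/>")
```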

When I'm designing a website, most of the time rendering a page (or even a snippet within a page) isn't about displaying the data from a single object - instead, I often want to render a whole object graph. Ideally, I want to be able to choose a rendering for each type of object I'm concerned with, then compose these elements to produce the final page. Templates for objects should be reusable and modular, and for any given page I should be able to pick between multiple renderings of any given object, in some places displaying a concise summary and elsewhere a detailed exposition of all the available data.

Furthermore, while I believe that "the meaning should belong with the bytes" I also think that the rendering should be as decoupled from the bytes as possible to allow for maximum flexibility. Hence, I don't want my model classes to have any sort of "render" method - the display is something that should be layered on above the semantics provided by the names and types of the members of the object.

With these goals and composition in mind, I started with the following tiny bit of code:
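Reconstructed from the type given earlier (DataBinding[T] is T => NodeSeq => NodeSeq), a minimal version might look like this - String stands in for NodeSeq so the sketch is self-contained, and the example User binding is invented:

```scala
import scala.language.implicitConversions

type NodeSeq = String  // stand-in for scala.xml.NodeSeq
type Binding = NodeSeq => NodeSeq
trait DataBinding[T] extends (T => Binding)

// Pairs a value with its implicitly chosen DataBinding.
class Binder[T](value: T)(implicit dataBinding: DataBinding[T]) {
  def binding: Binding = dataBinding(value)           // composable closure
  def bind(template: NodeSeq): NodeSeq = binding(template)
}
implicit def binder[T](value: T)(implicit dataBinding: DataBinding[T]): Binder[T] =
  new Binder(value)

// With an implicit DataBinding in scope, any value gains .bind and .binding.
case class User(name: String)
implicit object UserBinding extends DataBinding[User] {
  def apply(u: User): Binding = _.replace("%name%", u.name)
}
val hello = User("Alice").bind("hello, %name%")
```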

The principal idea is that in my application, I will create one or more DataBinding instances for each of my model types, and then, by simply declaring an implicit val containing the instance I want in the narrowest scope possible, I can call .bind on any object for which an implicit DataBinding is available, or .binding if I want to compose multiple such binding functions and close over the model instance. Furthermore, this gives me good ways to compose bindings over both composition and inheritance hierarchies, as I'll show below.

Consider that I have the following simplified, store-oriented model objects. In Lift, these would probably be implemented using Mapper or JPA - in the app that I've been working on, these are Java classes persisted by Hibernate - certainly not things that I want to change in order to support rendering by Scala.
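A hypothetical, pared-down version of such a model, transliterated into Scala (the post's originals are Java classes persisted by Hibernate, and every field name here is illustrative):

```scala
import java.util.Date

case class User(firstName: String, lastName: String, email: String)
case class Address(line1: String, city: String, state: String, zip: String)
case class LineItem(description: String, quantity: Int, price: BigDecimal)

// An order carries a heterogeneous list of transactions.
sealed trait Transaction { def id: Int; def date: Date }
case class PaymentTransaction(id: Int, date: Date, amount: BigDecimal) extends Transaction
case class ShipmentTransaction(id: Int, date: Date, carrier: String) extends Transaction

case class Order(
  id: Int,
  buyer: User,
  shipTo: Address,
  items: List[LineItem],
  transactions: List[Transaction]
)
```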

With this set of models, I have a few different ways that they will be rendered - the display to the users will not be the same as the display to site administrators, I may (or may not) want to reuse the same rendering code for a line item in an order as in a shipping transaction, and so forth. Let's take a look at how this ends up.

First, I want to create some DataBinding objects for my classes that are built just on primitive types. Let's start with User.
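A sketch of what such a binding might look like - String stands in for NodeSeq, and the User fields are invented:

```scala
type NodeSeq = String  // stand-in for scala.xml.NodeSeq
type Binding = NodeSeq => NodeSeq
trait DataBinding[T] extends (T => Binding)

case class User(firstName: String, lastName: String, email: String)

// Applying the binding to a user yields a Binding closure over that user.
object UserBinding extends DataBinding[User] {
  def apply(user: User): Binding = template => template
    .replace("%firstName%", user.firstName)
    .replace("%lastName%", user.lastName)
    .replace("%email%", user.email)
}

val card = UserBinding(User("Alice", "Smith", "alice@example.com"))(
  "%firstName% %lastName% <%email%>")
```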

These implementations take advantage of partial application and the Scala type inferencer to turn the call to bind(...) into a closure with the type Binding, which is NodeSeq => NodeSeq. At this point, we're not yet taking advantage of the fact that these functions can compose, so we'll do so now as we build bindings for our more complex objects.
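The more complex binding might be sketched like this (self-contained, so the String stand-in, a pared-down model, and the binder conversion are restated; chooseTemplate is omitted):

```scala
import scala.language.implicitConversions

type NodeSeq = String  // stand-in for scala.xml.NodeSeq
type Binding = NodeSeq => NodeSeq
trait DataBinding[T] extends (T => Binding)

case class User(name: String)
case class Address(city: String)
case class Order(id: Int, buyer: User, shipTo: Address)

class Binder[T](value: T)(implicit b: DataBinding[T]) {
  def bind(template: NodeSeq): NodeSeq = b(value)(template)
}
implicit def binder[T](value: T)(implicit b: DataBinding[T]): Binder[T] = new Binder(value)

// The delegate bindings are abstract implicit vals: the nested .bind calls
// below resolve against them, but concrete values arrive only when the
// trait is instantiated.
trait OrderBinding extends DataBinding[Order] {
  implicit val userBinding: DataBinding[User]
  implicit val addressBinding: DataBinding[Address]

  def apply(order: Order): Binding = template =>
    order.shipTo.bind(order.buyer.bind(template.replace("%id%", order.id.toString)))
}

implicit object SimpleOrderBinding extends OrderBinding {
  val userBinding: DataBinding[User] = new DataBinding[User] {
    def apply(u: User): Binding = _.replace("%buyer%", u.name)
  }
  val addressBinding: DataBinding[Address] = new DataBinding[Address] {
    def apply(a: Address): Binding = _.replace("%city%", a.city)
  }
}

val orderView = Order(1, User("Alice"), Address("Denver")).bind("order %id%: %buyer%, %city%")
```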

There's a lot more going on in this one. First off, we define our DataBinding as a trait rather than a concrete object, in order to take advantage of abstract implicit val declarations. When we go to instantiate this trait, we will provide concrete implementations of the various bindings that we want applied to the objects from which an Order instance is composed. The compiler will then be able to apply the "binder" conversion to get a Binder instance which closes over the instance with the appropriate DataBinding so that we can simply call instance.bind and pass either a path to a template, or a chunk of XHTML extracted from the input with chooseTemplate to get our final value.

Friday, April 24, 2009

"After you have spent a certain amount of time in the compiler, you will come to feel angry resentment at every comment you encounter, because it might be trying to mislead you! I have modified my editor not to show me any comments in scala sources so they cannot tempt me with their siren songs of reasons and explanations. AH THE LIMITLESS SERENITY"

Seems to me that this technique is much more widely applicable than just to scala internals...

Tuesday, April 21, 2009

The question of whether to use a constructor or a factory method for object construction is not new; we've had this discussion for years in the Java community. Scala's approach to object construction has a few features that will undoubtedly reignite this debate.

On the one hand, ordinary object construction is significantly more powerful than in Java - first, all the ordinary Java boilerplate of assigning constructor parameters to member variables is abolished. What in Java would look like this:
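The comparison presumably ran along these lines - the Java field-and-assignment boilerplate is shown here in comments, with the Scala equivalent (a hypothetical Point class) beneath:

```scala
// The Java version needs explicit fields, assignments, and accessors:
//
//   public class Point {
//       private final int x;
//       private final int y;
//       public Point(int x, int y) {
//           this.x = x;
//           this.y = y;
//       }
//       public int getX() { return x; }
//       public int getY() { return y; }
//   }
//
// In Scala the primary constructor declares and assigns the members in one line.
class Point(val x: Int, val y: Int)

val p = new Point(1, 2)
```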

More significantly, Scala's trait mechanism allows for extension of a class definition at the use site; for example, if I have a trait that adds rendering logic to a class, I can mix it in only when I need it.
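For instance, with a hypothetical rendering trait, the mix-in happens at the instantiation site:

```scala
class Point(val x: Int, val y: Int)

// Rendering logic lives in a trait with a Point self-type, mixed in only
// where a renderable point is actually needed.
trait Renderable { self: Point =>
  def render: String = s"($x, $y)"
}

val plain = new Point(1, 2)                      // no rendering logic attached
val drawable = new Point(1, 2) with Renderable   // use-site extension
val shown = drawable.render
```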

So Scala's constructors are really powerful and you really want to use them, right? But wait...

It turns out that there are some subtle issues that arise if you start adding more logic to constructors in Scala. The logic of a Scala constructor goes directly in the body of the class, and here's the tricky bit: this is also where other member variables that aren't just blindly assigned as constructor parameters are declared and assigned. In Java, any intermediate variables that you used within a constructor were unavoidably local; in Scala they can easily (and will, if care is not taken) become a permanent part of the object.

Let's look at something a little more complex. Consider a class that, as part of its construction, finds the most common element in a list and assigns both that element and the number of occurrences to member variables.
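A sketch of such a class (the element type and the method of tallying occurrences are illustrative):

```scala
class Common[T](items: Iterable[T]) {
  // The destructuring val below is the construction logic - but because it
  // sits in the class body, the Scala 2 compiler also stores the whole
  // Tuple2 in a synthetic private field of Common, not just the two vals.
  val (mostCommon, occurrences) =
    items.groupBy(identity).map { case (k, vs) => (k, vs.size) }.maxBy(_._2)
}

val c = new Common(List("a", "b", "a"))
```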

Now, there may be ways to implement this that avoid constructing and decomposing a tuple, but this is the most straightforward and efficient implementation I could come up with. A peek at the generated bytecode reveals something interesting, however: alongside the two member fields, the class retains a synthetic private field of type Tuple2.

What's up with that extra field? I didn't want that spurious Tuple2 to hang around as a member field - I just needed it as an intermediate value in the construction of the object!

As it turns out, this problem is not restricted to tuples; if you use intermediate variables in the construction of your objects, they will become permanent residents. This may not be a real problem in most circumstances, but it feels messy. Now, the standard thing to do in this situation would be to create a factory method on the companion object:
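Sketched against the same example:

```scala
// The class now just stores its two members...
class Common[T](val mostCommon: T, val occurrences: Int)

object Common {
  // ...and the tuple stays local to the factory method, so no extra field.
  def apply[T](items: Iterable[T]): Common[T] = {
    val (elem, count) =
      items.groupBy(identity).map { case (k, vs) => (k, vs.size) }.maxBy(_._2)
    new Common(elem, count)
  }
}

val viaFactory = Common(List(1, 2, 2, 3))
```

The trade-off is exactly the one described below: Common(items) is an ordinary method call, so there is no longer a `new` at the use site to hang a mix-in from.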

Now there's no intermediate variable stored in the bytecode for Common, but we have a new problem: we can no longer construct the object from an iterable while mixing in an additional trait at the instantiation site!

Scala supports the use of auxiliary constructors, with the caveat (similar to that present in Java) that the first statement in an auxiliary constructor must be either a call to the primary constructor, or another auxiliary constructor. Because of this constraint, we can't simply use the contents of the factory method above in an auxiliary constructor. We can, however, evaluate a method within the chained call, and that gives us a workable, if somewhat boilerplate-laden solution.
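A sketch of that solution, with a hypothetical tally helper on the companion:

```scala
// The private primary constructor takes the precomputed tuple; the public
// auxiliary constructor chains to it through a companion helper, which keeps
// the tuple out of the object's fields.
class Common[T] private (pair: (T, Int)) {
  val mostCommon: T = pair._1
  val occurrences: Int = pair._2

  // An auxiliary constructor must begin with a constructor call, so the
  // tally is computed inside that call.
  def this(items: Iterable[T]) = this(Common.tally(items))
}

object Common {
  private def tally[T](items: Iterable[T]): (T, Int) =
    items.groupBy(identity).map { case (k, vs) => (k, vs.size) }.maxBy(_._2)
}

// Instantiation-site mix-ins still work, because there is a `new` again.
trait Labeled { def label: String = "common" }
val mixed = new Common(List("x", "x", "y")) with Labeled
```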

By threading the decomposition through a private constructor that takes a tuple, we can now avoid the spurious intermediate values getting incorporated into the class, and still enjoy the benefits of instantiation-site mix-ins. What's more, there has been a bunch of talk on the Scala mailing list about a future unification of tuples with method (and hopefully constructor) parameters - in which event the extra private constructor could disappear entirely!