Summary
In his weblog, Keith Braithwaite writes, "There is no mechanism available to the Smalltalk programmer to create programs other than the creation of EDSLs (Embedded Domain Specific Languages)," and suggests that this is the source of Smalltalk's productivity.


In his weblog entry, Keywords, Magic and (E)DSLs,
Keith Braithwaite writes, "There is no mechanism available to the Smalltalk programmer to create programs other than the creation of EDSLs (Embedded Domain Specific Languages)."

He gives an example of how you might make an embedded DSL in Smalltalk, suggests that the reason Smalltalk (and Lisp) makes the creation of DSLs so easy is that the language provides a very small core that everything else is built on, and contrasts this with Java.

Building DSLs is the bread and butter of Smalltalk (and Lisp) programming, but is a bit of a struggle in the Java (and similar) worlds. The big vendors are attempting to fix this through the use of mighty tools in the interests of supporting a new-but-old-but-new model of development, a rather fishy proposition at best.

This is symptomatic of one way in which the industry has decayed. The message of Smalltalk (and Lisp) is that the route to productivity is to use simple tools with few features and allow everyone interested to build upon them. The favoured route at the moment is to encode every good idea into an all-singing all-dancing "solution", take it or leave it.

I believe that DSLs allow you to raise the level of abstraction to match your particular domain, enabling you to program with clarity and a minimum of keystrokes. Having spent most of my career in C, C++, then Java, the way I've always created DSLs is by defining grammars and code generators with the help of a parser generator such as Yacc/Lex, JavaCC, or ANTLR. The ability to twist the general programming language itself into a DSL was something I've also heard claimed as a strength of Ruby, and this approach to DSLs was described in the article, Creating DSLs with Ruby. This article attempts to describe the tradeoffs of these two approaches to DSLs:

A DSL, or domain specific language, is a (usually small) programming or description language designed for a fairly narrow purpose. In contrast to general-purpose languages designed to handle arbitrary computational tasks, DSLs are specific to particular domains. You can create a DSL in two basic ways:

Invent a DSL syntax from scratch, and build an interpreter or compiler for it.

Build the DSL on top of an existing general-purpose language, bending that language's own syntax to the domain (an embedded DSL).

An advantage to the second approach is that you save time because you don't have to write and debug a new language, leaving you more time to focus on the problem confronting the end-user. Among the disadvantages is that the DSL will be constrained by the syntax and the capabilities of the underlying general-purpose language. Furthermore, building on another language often means that the full power of the base language is available to the end-user, which may be a plus or minus depending on the circumstances.

I think an important path to productivity is to raise the level of abstraction of your program to match the specific problem domain you are working on. For example, we had a requirement for versioned data, and devised a way to version data that we wanted to use consistently everywhere in our application where versioned data was called for. We created a DSL in which we describe our entities, and can make an entity versioned by the addition of one keyword: versioned. I have not had the experience of creating an embedded DSL, but I'm curious to what extent the ability to raise the level of abstraction while still working in the general purpose programming language is the source of productivity in languages such as Ruby and Smalltalk, as Keith Braithwaite suggests in his weblog. If you've had such an experience, please describe it in the forum discussion for this post (link below).
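As a purely hypothetical sketch (the post doesn't show the project's actual DSL syntax), an entity description in such a DSL might look something like this, with a single keyword opting an entity into versioning:

```
entity Article {
    versioned            // one keyword makes this entity versioned
    author: String
    title:  String
}
```

The point is that the code generator, not the application programmer, carries the burden of expanding that one keyword into all the versioning machinery.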

"to what extent the ability to raise the level of abstraction while still working in the general purpose programming language"

What is the difference between domain-specific user defined types and an "Embedded" DSL? What is the difference between domain-specific class libraries and an "Embedded" DSL? Is there a difference, or is it simply that DSLs are seen as more productive and we'd like to be able to say we also have some kind of DSL? Is this just a neologism?

1) IMO some of the "Keywords, Magic and (E)DSLs" examples are a little too self-serving: "We've named the Smalltalk method parameters with care."

2) The Smalltalk ifTrue:ifFalse: example isn't correct for some Smalltalks because the implementation of ifTrue:ifFalse: is specially inlined by the compiler - even if we define ifTrue:ifFalse: in String we still get the same Boolean behaviour.

3) Keith Braithwaite asks, "What about those of us who are not fortunate enough to be able to use Smalltalk in our work":

> "to what extent the ability to raise the level of abstraction while still working in the general purpose programming language"
> What is the difference between domain-specific user defined types and an "Embedded" DSL?
> What is the difference between domain-specific class libraries and an "Embedded" DSL?
> Is there a difference, or is it simply that DSLs are seen as more productive and we'd like to be able to say we also have some kind of DSL? Is this just a neologism?

I had the same questions when reviewing the "Creating DSLs with Ruby" article. To me a language implies a grammar, but in the case of embedded DSLs, the grammar is really that of the general purpose programming language. The answer that came back from the Ruby guys was that it really is more an API, but in a stretchy language like Ruby, it *appears* more like a DSL. It looks and feels like a domain specific language, even if it really is a domain specific API in a general purpose language. Although it isn't enforced, you could express a grammar for an embedded DSL, and if you just use that, then it acts like a DSL with a grammar, but it was likely easier to create than if you'd used a parser generator.
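To make the "embedded DSL is really an API" point concrete, here is a small hypothetical Java sketch (the `Meeting` class and its method names are invented for illustration): a fluent API whose chained calls read almost like a little domain language, even though every construct is plain Java.

```java
// A hypothetical fluent "scheduling" API: plain Java method calls,
// but chaining lets the client code read almost like a sentence.
public class Meeting {
    private String topic;
    private int hour;
    private String room;

    // Static factory starts the "sentence".
    public static Meeting about(String topic) {
        Meeting m = new Meeting();
        m.topic = topic;
        return m;
    }

    // Each step returns 'this' so calls can be chained.
    public Meeting at(int hour)    { this.hour = hour; return this; }
    public Meeting in(String room) { this.room = room; return this; }

    @Override public String toString() {
        return topic + " at " + hour + ":00 in " + room;
    }

    public static void main(String[] args) {
        // Reads like a domain statement, but it is just an API:
        Meeting m = Meeting.about("release planning").at(14).in("Room 3");
        System.out.println(m); // prints "release planning at 14:00 in Room 3"
    }
}
```

There is no new grammar here; the "DSL" is entirely the host language's syntax, which is exactly the distinction the posts above are circling.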

> Use a JVM language with named method parameters for something more like keyword methods, Nice.
> http://nice.sourceforge.net/manual.html#namedParameters
>
> Use a JVM language with features that allow the definition of new statements, Scala.
> http://scala.epfl.ch/intro/targettyping.html#stats

I've been reading a lot about Scala the past week, and it is very interesting. One of the categories Scala seems to fall into is that of a "no magic" language, as Keith Braithwaite would put it. Everything in the language seems to be built out of a few core ideas. Scala seems really elegant, but I'm not sure that means, as Keith seems to conclude, that it would therefore be easy to make embedded DSLs in Scala. I wonder if what people really mean by embedded DSL is that you can express higher level abstractions with minimal bits of syntax, as you can with a grammar-based DSL.

> Although it isn't enforced, you could express a grammar for an embedded DSL...

Maybe there's a straightforward distinction - if we actually do write a grammar then we have an embedded DSL, otherwise we just have hype wrapped around class libraries :-)

> I wonder if what people really mean by embedded DSL is that you can express higher level abstractions with minimal bits of syntax, as you can with a grammar-based DSL.

Things become interesting when we can step outside the embedding language, like implementing a Prolog dialect in Smalltalk:
http://prog.vub.ac.be/research/DMP/soul/SOULManual.pdf

> Half the time, when the ruby folks talk about DSLs, they're only talking about APIs that happen to not have parentheses.

Yes, I have come to the same conclusion, but I don't think that's bad. They don't mean language in the context-free grammar/parser/code-generator sense, but in the sense of syntax that allows me to express myself at the domain level. I think that's the way Smalltalkers and LISPers are really talking about it too.

Martin Fowler's short essay, which Isaac linked to, included a link to this interview with Dave Thomas:

Dave I think really clarifies the difference between metaprogramming with code generators in Java and C++ and doing it dynamically with a language like Ruby:

Once you start working in dynamic languages such as Ruby, code generation takes on a whole new meaning, as you can effectively extend the language at runtime from within. This lets you do a boatload of stuff that you'd normally do with code generators.

And he also says:

I rarely (if ever) write a code generator that generates Ruby code: there's just no need, as Ruby is dynamic enough to let me do what I want without leaving the language.

That may capture much of what Keith Braithwaite is trying to say about productivity with Smalltalk. You can mold the language like clay into what you need it to do. I haven't had much experience working with languages like Ruby, Python, Smalltalk, LISP, but certainly with Java and C++, I have often felt the urge to automate with code generators.

> That may capture much of what Keith Braithwaite is trying to say about productivity with Smalltalk. You can mold the language like clay into what you need it to do. I haven't had much experience working with languages like Ruby, Python, Smalltalk, LISP, but certainly with Java and C++, I have often felt the urge to automate with code generators.

There are definitely times when the rigidity of Java requires some sort of code generation, but I also think that sometimes this need comes from an ideology of design. The one time I really needed code generation in Java was when I was required to write umpteen JavaBeans with dozens of getters and setters. Aside from there being too many properties per type, I feel the JavaBean approach meant hundreds of methods that were basically identical. I wasn't able to get any reuse. I suggested a hybrid approach with a map structure so I could encapsulate this repetitive logic in a single method, but it was shot down as "too complex". I see this kind of thing over and over again. Reams of Java classes that serve only structural purposes.

I definitely think that Java could use a few features but in some ways I think we are victims of poor design methodology and not the language.

> There are definitely times when the rigidity of Java requires some sort of code generation, but I also think that sometimes this need comes from an ideology of design. The one time I really needed code generation in Java was when I was required to write umpteen JavaBeans with dozens of getters and setters. Aside from there being too many properties per type, I feel the JavaBean approach meant hundreds of methods that were basically identical. I wasn't able to get any reuse. I suggested a hybrid approach with a map structure so I could encapsulate this repetitive logic in a single method, but it was shot down as "too complex". I see this kind of thing over and over again. Reams of Java classes that serve only structural purposes.
>
> I definitely think that Java could use a few features, but in some ways I think we are victims of poor design methodology and not the language.

I don't think that code generation is a sign of a poorly designed language, as Dave Thomas suggests in his interview:

I think you consider code generators when you bump up against a language limitation. If the language is forcing you into duplication, or into some ridiculous contortions, then use a code generator to fix the problem. I'm just saying you'll bump into those limitations a lot sooner in Java and C# than you might in better designed languages.

Yes, in Ruby I don't have to make a grammar to do metaprogramming, I can just add fields and methods to objects dynamically at runtime, but I also lose static compile-time type checking. That's a tradeoff, not a flaw. Whether or not the static approach is appropriate depends on the situation. Sometimes the right tradeoff is to make the programming go a bit slower in exchange for faster runtime, or more checking at compile time.

Your example of JavaBeans versus a map sounds a bit like options we had for our generated entities, which are like JavaBeans (not enterprise JavaBeans, but in that spirit--POJOs that hold data in private variables and have get and set methods). In the discussion that followed my code generation blog, Sasha Ovsiankin suggested this:

...if we want code generator to enable us write "article.getAuthor()", why not to write instead something like 'article.get("author")' which can be implemented in runtime and doesn't require code generation? Granted, the former is faster and a bit more readable (only a bit), but we can specifically optimize some 3% of the calls, while leaving the rest totally dynamic.

The answer is that article.getAuthor() gives me more static type checking than the dynamic approach, and I think the code that uses the API will be more clear. In the case of your JavaBeans versus a map, perhaps there was a similar tradeoff. If you are going to check types at compile time, you can't add methods at runtime, because the methods are part of the definition of the type. Static type checking is not a "ridiculous contortion" in my opinion. It is a tool we can apply to good advantage in appropriate situations. And it means that to get at metaprogramming, you'll do more code generation.
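A minimal sketch of that tradeoff, with hypothetical `Article` code (not the actual generated entities from the post): the typed getter is checked by the compiler, while the string-keyed variant defers every mistake to runtime.

```java
import java.util.HashMap;
import java.util.Map;

// Contrasts the two access styles from the discussion above.
public class Article {
    private final Map<String, Object> fields = new HashMap<>();

    public void setAuthor(String author) { fields.put("author", author); }

    // Statically typed: a typo like getAuhtor() fails at compile time,
    // and the result needs no cast at the call site.
    public String getAuthor() { return (String) fields.get("author"); }

    // Dynamically keyed: a typo like get("auhtor") just returns null
    // at runtime, and callers must cast the result themselves.
    public Object get(String name) { return fields.get(name); }

    public static void main(String[] args) {
        Article a = new Article();
        a.setAuthor("Bill");
        System.out.println(a.getAuthor());   // checked by the compiler
        System.out.println(a.get("author")); // checked only at runtime
    }
}
```

The dynamic variant is more flexible (one method serves every attribute), which is exactly the "3% optimized, rest dynamic" position quoted above.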

> I don't think that code generation is a sign of a poorly designed language, as Dave Thomas suggests in his interview:
>
> I think you consider code generators when you bump up against a language limitation. If the language is forcing you into duplication, or into some ridiculous contortions, then use a code generator to fix the problem. I'm just saying you'll bump into those limitations a lot sooner in Java and C# than you might in better designed languages.
>
> Yes, in Ruby I don't have to make a grammar to do metaprogramming, I can just add fields and methods to objects dynamically at runtime, but I also lose static compile-time type checking. That's a tradeoff, not a flaw.

You can do dynamic programming in Java through reflection. It's just very ugly.
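For illustration, a small self-contained example of what that ugliness looks like (the `Greeter` class is invented): the same one-line call made directly and then through `java.lang.reflect`.

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    public static class Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static void main(String[] args) throws Exception {
        // Direct, statically checked call: one line.
        String s1 = new Greeter().greet("world");

        // The same call through reflection: verbose, string-keyed,
        // and every mistake (wrong name, wrong types, wrong cast)
        // surfaces only at runtime.
        Object greeter = Greeter.class.getDeclaredConstructor().newInstance();
        Method m = Greeter.class.getMethod("greet", String.class);
        String s2 = (String) m.invoke(greeter, "world");

        System.out.println(s1.equals(s2)); // the two paths agree
    }
}
```

Both paths produce the same result; only the second one gives up compile-time checking in exchange for runtime flexibility.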

> Whether or not the static approach is appropriate depends on the situation. Sometimes the right tradeoff is to make the programming go a bit slower in exchange for faster runtime, or more checking at compile time.

I'm not disputing this. My point is that I see a lot of tunnel vision in that there is only one way to do things in Java.

> Your example of JavaBeans versus a map sounds a bit like options we had for our generated entities, which are like JavaBeans (not enterprise JavaBeans, but in that spirit--POJOs that hold data in private variables and have get and set methods). In the discussion that followed my code generation blog, Sasha Ovsiankin suggested this:
>
> ...snipped
>
> The answer is that article.getAuthor() gives me more static type checking than the dynamic approach, and I think the code that uses the API will be more clear.

My hybrid approach was to provide the getters and setters but to forward them to a map. It would be runtime checked but because the insertion was controlled, this was pretty much irrelevant.

The value was that the getters and setters could be written (or generated) once and the common logic defined in a single place.

> In the case of your JavaBeans versus a map, perhaps there was a similar tradeoff. If you are going to check types at compile time, you can't add methods at runtime, because the methods are part of the definition of the type.

The problem with this is that JavaBeans are geared towards use with reflection so they end up being dynamic and your type checking becomes runtime too.

The Java bean spec could have been defined using an interface that provided all the information we have now without all the reflection and redundant code.

> Static type checking is not a "ridiculous contortion" in my opinion. It is a tool we can apply to good advantage in appropriate situations. And it means that to get at metaprogramming, you'll do more code generation.

I think strong typing is useful but after doing a lot of careful thinking about it, I don't see that the compile time checking provides that great of a value in getting a program right. The value of types becomes clear in the maintenance phase of the application when you want to modify a type.

> You can do dynamic programming in Java through reflection. It's just very ugly.

Yes, which tells me that it is not intended to be the "normal" way to do things in Java.

> > Your example of JavaBeans versus a map sounds a bit like options we had for our generated entities, which are like JavaBeans (not enterprise JavaBeans, but in that spirit--POJOs that hold data in private variables and have get and set methods). In the discussion that followed my code generation blog, Sasha Ovsiankin suggested this:
> >
> > ...snipped
> >
> > The answer is that article.getAuthor() gives me more static type checking than the dynamic approach, and I think the code that uses the API will be more clear.
>
> My hybrid approach was to provide the getters and setters but to forward them to a map. It would be runtime checked, but because the insertion was controlled, this was pretty much irrelevant.

OK. I don't quite understand the design, but I don't know the context either.

> The value was that the getters and setters could be written (or generated) once and the common logic defined in a single place.

But usually getters and setters just get and set. So what logic was common that you were factoring out?

> The problem with this is that JavaBeans are geared towards use with reflection, so they end up being dynamic and your type checking becomes runtime too.

Java has all kinds of runtime checks too, pretty cool stuff in the VM spec that ensures the safety of executed code. But regardless of what's going on at runtime, when I compile code that has JavaBeans get and set methods in it, all the types are statically checked at compile time.

> The Java bean spec could have been defined using an interface that provided all the information we have now without all the reflection and redundant code.

Can you elaborate? I'm not quite following you here.

> I think strong typing is useful but after doing a lot of careful thinking about it, I don't see that the compile time checking provides that great of a value in getting a program right. The value of types becomes clear in the maintenance phase of the application when you want to modify a type.

I think strong typing is overkill for some things, but helpful for others. Like for our build automation I think static type checking is overkill, but for our new web architecture, I think it is helpful. I can list several ways in which having the type information helps:

- It finds type errors at compile time, sooner rather than later.
- It gives my IDE more information that it can use to help me refactor more quickly and safely, look up lists of methods to call after a dot, etc.
- At runtime it gives the optimizer more information to work with, which can help it do a better job of optimizing.
- I can easily find all the places that will be affected by an API change by changing the API and doing a compile. The compiler gives me a list of all the places that were broken (except those that use reflection and go around the static type checker).
- It makes the code more explicit, though in Java at the cost of a lot of verbosity. (I like type inference, because it makes the code smaller, and an IDE could tell me what any type is if I ask. Scala, which I've been reading about this week, has concise syntax and expressiveness, but is strongly typed. I really like what I've read about it so far.) So there's a bit of a tradeoff here in that it is more work to finger-type, but when reading, I can get an answer to what type things are.

I think static type checking helps especially in projects that need to scale up to a lot of code.

Can you elaborate on why you think compile time type checking doesn't help much in getting the program right, but that when maintenance time comes, it is useful?

> But usually getters and setters just get and set. So what logic was common that you were factoring out?

Setting dirty markers, firing off events to listeners.
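A minimal sketch of such a hybrid bean, with hypothetical names (the thread doesn't show the original design): typed accessors forward to one map-backed `set` method, so the dirty-marking and event-firing logic is written exactly once.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.HashMap;
import java.util.Map;

// Hypothetical "hybrid" bean: getters/setters exist for static type
// checking, but all storage and common logic lives in one place.
public class HybridBean {
    private final Map<String, Object> values = new HashMap<>();
    private final PropertyChangeSupport listeners = new PropertyChangeSupport(this);
    private boolean dirty;

    // The common logic, defined once for every property:
    private void set(String name, Object value) {
        Object old = values.put(name, value);
        dirty = true;                                   // dirty marker
        listeners.firePropertyChange(name, old, value); // event to listeners
    }
    private Object get(String name) { return values.get(name); }

    // Thin typed wrappers keep client code statically checked:
    public void setTitle(String t) { set("title", t); }
    public String getTitle()       { return (String) get("title"); }

    public boolean isDirty() { return dirty; }
    public void addListener(PropertyChangeListener l) {
        listeners.addPropertyChangeListener(l);
    }
}
```

The wrappers are boilerplate (and could themselves be generated), but the repetitive logic no longer has to be duplicated in every accessor body.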

If all the getters and setters do is get and set, why do you need them at all? They are equivalent to public variables.

> > The problem with this is that JavaBeans are geared towards use with reflection, so they end up being dynamic and your type checking becomes runtime too.
>
> Java has all kinds of runtime checks too, pretty cool stuff in the VM spec that ensures the safety of executed code. But regardless of what's going on at runtime, when I compile code that has JavaBeans get and set methods in it, all the types are statically checked at compile time.

Not if you are using them for reflection. If you are not, there are lots of more effective approaches to design. Getters and setters are basically just fancy attributes.

> > The Java bean spec could have been defined using an interface that provided all the information we have now without all the reflection and redundant code.
>
> Can you elaborate? I'm not quite following you here.

> > I think strong typing is useful but after doing a lot of careful thinking about it, I don't see that the compile time checking provides that great of a value in getting a program right. The value of types becomes clear in the maintenance phase of the application when you want to modify a type.
>
> I think strong typing is overkill for some things, but helpful for others. Like for our build automation I think static type checking is overkill, but for our new web architecture, I think it is helpful. I can list several ways in which having the type information helps:
>
> - It finds type errors at compile time, sooner rather than later.

How many errors does this help you find? I can count the number over 6 months on one hand, and these errors are not the ones that keep me late at work.

> - It gives my IDE more information that it can use to help me refactor more quickly and safely, look up lists of methods to call after a dot, etc.

This I agree with. But you can get this without the type checks. All you need is the type. It's really a subtle difference but by making casting implicit, it would make a lot of code more clear.

The question is whether you allow calling an arbitrary method on an Object of unknown type. I'm not sure that's a good choice for Java.

> - At runtime it gives the optimizer more information to work with, which can help it do a better job of optimizing.

Again, this is accomplished by types, not static type checking.

> - I can easily find all the places that will be affected by an API change by changing the API and doing a compile. The compiler gives me a list of all the places that were broken (except those that use reflection and go around the static type checker).

This (dependency checking) is the true value.

> - It makes the code more explicit, though in Java at the cost of a lot of verbosity. (I like type inference, because it makes the code smaller, and an IDE could tell me what any type is if I ask. Scala, which I've been reading about this week, has concise syntax and expressiveness, but is strongly typed. I really like what I've read about it so far.) So there's a bit of a tradeoff here in that it is more work to finger-type, but when reading, I can get an answer to what type things are.

The problem I have with Scala is that it has some non-intuitive features like the 'with' syntax. It uses implied extension (IIRC) instead of explicit calls to super as Java does.

> I think static type checking helps especially in projects that need to scale up to a lot of code.
>
> Can you elaborate on why you think compile time type checking doesn't help much in getting the program right, but that when maintenance time comes, it is useful?

Because as I work with dynamic tools, I see that the code gets done just as well; it's just that later it's very hard to figure out what it's doing or how it can be changed without breaking things. Java code also gets done, just more slowly, but it is (usually) easier to transform in place. For this reason, I am experimenting with Jython and Java in unison: Java for the 'hard' core components and Jython for the more trivial and less reusable pieces. This 'good cop, bad cop' approach has a lot of promise in our work.

In addition, I've been using Java without generics for 7 years now and I have never had a production ClassCastException that didn't involve reflection or dynamic class loading. Making developers type "(String)" a million times is not making code more reliable. It's mainly a waste of time and clutters up the code.

"But regardless of what's going on at runtime, when I compile code that has JavaBeans get and set methods in it, all the types are statically checked at compile time."

Unless your JavaBeans are meant to be used in a framework like Struts that depends heavily on reflection. You change an attribute, everything compiles fine, and then the app just crashes with a completely unintelligible error message. You have to do a text search to find out what is being used where, and then the names may be ambiguous... Frustrating experiences with reflection are part of the reason why I prefer static typing. And I agree that it is more important for maintenance than for getting the program right in the first place. You usually know the structure of the code you are currently developing, but understanding the dependencies in legacy code can be very hard. This would be less of a problem with well-designed code, but in our industry, well-designed code is a privilege, not a right.

> If all the getters and setters do is get and set, why do you need them at all? They are equivalent to public variables.

I think I see what you're getting at. You're right about get/set being equivalent to a public instance variable, except that get/set does provide a layer of abstraction that allows you in the future to change their implementation without breaking clients. Admittedly that doesn't happen very often. In our case, though, the getters are type checked, but the setters are often private and used only by Hibernate via reflection. Some of our setters are public, because they are mutable fields. So in that case, it is really just info hiding and conforming to the standard way of doing attributes in Java.

> interface Reflection
> ...
>
> java beans basically treat Objects like Maps.

I see. You're saying that although with this interface you're not getting static type checking, if you are accessing the bean attributes via reflection, you aren't getting static type checking with that approach either. In my entity case, we were using reflection for most set methods, but all the clients were explicitly calling the get method, which was statically checked. And by having private set methods, we get info hiding. In the generic JavaBeans case, I think there are a lot of non-reflective uses of beans, in which case the get/set approach would be preferred (in my opinion) because it is more explicit and more type safe. So you get type safe direct access, and you also have the possibility to use reflection.

> > - It finds type errors at compile time, sooner rather than later.
>
> How many errors does this help you find? I can count the number over 6 months on one hand, and these errors are not the ones that keep me late at work.

Do you use an IDE? Do you ever select from a list of methods to call that make sense for the type of the variable? Do you get a little red squiggle sometimes that you then fix? All those are fancy ways of preventing or fixing compile-time type errors. That happens to me all the time. One of the things I really like about these modern IDEs is that I get fewer compiler errors of any kind, because as I go I'm notified of just about anything that won't compile.

I also get type errors occasionally that the compiler doesn't find. For example, if I have a method that takes three Strings in a row, sometimes I might get them out of order. That happens now and then too, and since it isn't checked at compile time, I find out at runtime (or unit test time).
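That out-of-order-Strings mistake can be sketched like this (the names are hypothetical, and the sketch uses Java records, which postdate this discussion): the plain version compiles even with swapped arguments, while thin wrapper types turn the swap into a compile-time error.

```java
public class StringOrder {
    // Three Strings in a row: the compiler cannot tell them apart.
    static String plainGreeting(String first, String last, String city) {
        return "Hello " + first + " " + last + " of " + city;
    }

    // Tiny wrapper types give each argument a distinct static type.
    record FirstName(String value) {}
    record LastName(String value) {}
    record City(String value) {}

    static String typedGreeting(FirstName f, LastName l, City c) {
        return "Hello " + f.value() + " " + l.value() + " of " + c.value();
    }

    public static void main(String[] args) {
        // Compiles fine, but the first two arguments are swapped --
        // a bug the type checker cannot see:
        System.out.println(plainGreeting("Smith", "Anna", "Oslo"));

        // With wrappers, swapping FirstName and LastName is a
        // compile-time error, not a runtime surprise:
        System.out.println(typedGreeting(
            new FirstName("Anna"), new LastName("Smith"), new City("Oslo")));
    }
}
```

The cost, as elsewhere in this thread, is verbosity; the benefit is moving one more class of error from runtime to compile time.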

I think the more useful way to ask this question is: how many errors does static type checking help you find that you don't find via unit testing, or quickly via plain old testing, using a non-static language such as Python, Ruby, or Smalltalk? I find that static type checking catches all kinds of problems, but the question is, is it worth the added programmer pain compared to dynamic alternatives? My feeling is that the answer depends on the situation. For our build automation, for example, I think static typing is not worth the pain. For our web application, I think it is.

> > - It gives my IDE more information that it can use to help me refactor more quickly and safely, look up lists of methods to call after a dot, etc.
>
> This I agree with. But you can get this without the type checks. All you need is the type.

I agree. My point is not so much about type checking, although that's what I kept referring to in my previous post, as about the specification of types in the program. This is what Dave Thomas seems to consider a poor design choice for a language, and I was trying to challenge that view.

> It's really a subtle difference but by making casting implicit, it would make a lot of code more clear. The question is whether you allow calling an arbitrary method on an Object of unknown type. I'm not sure that's a good choice for Java.

What do you mean by making casting implicit?

> > - At runtime it gives the optimizer more information to work with, which can help it do a better job of optimizing.
>
> Again, this is accomplished by types, not static type checking.

Agreed. Another point is that a static analyzer has more information to work with if the variables have types. I expect FindBugs to help me a bit more than pychecker (though I have little experience with either). I just expect that the more information the analysis tool has at its disposal, the more useful the analysis can be.

> This (dependency checking) is the true value.

Especially as the program grows in size.

> ...As I work with dynamic tools, I see that the code gets done just as well; it's just that later it's very hard to figure out what it's doing or how it can be changed without breaking things. Java code also gets done, just more slowly, but it is (usually) easier to transform in place. For this reason, I am experimenting with Jython and Java in unison: Java for the 'hard' core components and Jython for the more trivial and less reusable pieces. This 'good cop, bad cop' approach has a lot of promise in our work.

This is something I've wanted to do too: use Java to build the API bricks that I then script against with a less verbose language. But I've never been able to let go of the refactoring support I get from my IDE when I stick with Java everywhere. How have you handled that? In other words, if you script, say, your controllers in Jython, and maybe your tests, and you go to do some major refactorings of the Java part, don't you have to change all that Jython stuff by hand? The refactoring browser is one of those analyzers that can do more useful things because the type information is there. In the Jython portion, even if the IDE supported refactoring, I don't see how it could do as good a job.

What I've taken to doing instead is creating DSLs. Our little DSLs are very concise, but they generate the verbose Java code that has strong types. So in cases where I can justify a DSL, I get what I want. The trouble is that I force us to write things many times over by hand before creating a DSL, to make sure we know how to abstract. In the long term, I think we'll be quite fast because of our DSL code generators, but in the short term, I'm pretty much going at a typical Java pace.

> "But regardless of what's going on at runtime, when I compile code that has JavaBeans get and set methods in it, all the types are statically checked at compile time."
>
> Unless your JavaBeans are meant to be used in a framework like Struts that depends heavily on reflection. You change an attribute, everything compiles fine, and then the app just crashes with a completely unintelligible error message. You have to do a text search to find out what is being used where, and then the names may be ambiguous... Frustrating experiences with reflection are part of the reason why I prefer static typing. And I agree that it is more important for maintenance than for getting the program right in the first place. You usually know the structure of the code you are currently developing, but understanding the dependencies in legacy code can be very hard. This would be less of a problem with well-designed code, but in our industry, well-designed code is a privilege, not a right.

This brings up a good counterexample, and another way to answer James Watson's question about how often the compiler finds type errors. Frank and I looked at Struts and a few other Java web frameworks a couple of years back, decided we didn't like them, and wrote our own. I avoid reflection in Java designs because I want to get as much static type checking as possible out of the compiler, and our home-grown web framework does not use reflection. However, we do use dynamic types in one way: Velocity. Velocity takes a context map that maps string names to object values, and using reflection it dynamically merges a template with the values of little variables like $name and $bean.attribute. We do occasionally have problems, which we discover only by running the program, of putting the wrong type into the context, or forgetting to put something in, and so on.

I want to automate the testing of our templates (which is further complicated because the templates can be localized) to do a kind of type checking. In other words, by looking at our controller DSL, I can generate a test that checks that all localized incarnations of each template use only things that are passed in the context (and I will automatically generate code that fills the context). I expect that kind of static type checking at compile time (well, in this case, automated test time) will be invaluable as we scale to lots and lots of templates and multiple locales. And this, I think, illustrates how useful static type checking in regular software is as a system scales to lots of code.

Am I the only one who codes "error free" thanks to the IDE's static type checking facilities? It's the nature of the language that gives Java IDEs their power. I haven't seen that productivity elsewhere. Dumb syntax checking everyone has, and it is a convenience. Being able to completely invert a large application design in 5 minutes with no errors or bugs *without* even having been the original author - that's power.