I think this is the most likely threat to the Rails surplus, that C# or Scala or something can do a good enough job that people can double their productivity with far less of a change in mindset or tools, and eventually no one will care about the ten times (or whatever) productivity of Rails. “Good enough is good enough.”

Now I’ve not used C# much in anger (I have, however, seen some of the cool ideas pulled in from F#), so I won’t comment on it. But I’ve now developed one large Rails app (multi-year, multi-person) and one small Rails app, and been peripherally involved in one medium and one small Scala project (in addition to three Scala OSS projects), preceded by about 10 years of Java/C++/PHP projects.

So, I feel quite qualified to offer an opinion on the contention that Scala’s doubling of productivity is “good enough”. To summarise this post early: this is just plain wrong [1] (the productivity figure, that is, not the “good enough” bit). Learning functional techniques will not only make you more productive, but at worst will make you a better “Rails programmer”.

Note that I’m not going to address the “Rails vs. Scala” debate, which would be better rephrased as “Ruby vs. Scala”, or “Rails vs. Lift or a hypothetical Scala web app framework”.

Rails

I’m quite happy for people to claim the now-standard “10x” productivity boost for new Rails apps. Having been through a couple now, I can see how it gets you up and running quickly, mainly by taking a bunch of decision points away from you (decisions you’d have to make in a Java project, for example: which O/R tool, what directories do I need, etc.) and by simplifying a lot of the drudgery found in most web projects. Throw in some clunky higher-order functions for good measure and you’re away!

Actually, I’m not entirely happy accepting this. In my experience you save about 25% of total development time on small projects, but the curve flattens out the longer the project goes on (say, after a few months), if you add more people, if the project is complex, or if it’s not a webapp. If any of these factors loom large, you lose the advantage of Rails. Ruby’s lack of a static type system also leads to issues, but I don’t want to address those in this post.

So I have no real problem with Rails for simple web projects; Rails is optimised for the general case, and when you stick to what it’s good for, Rails is fine (ignoring the thorny issues of side-effects and composability). What I do have a problem with is the claim that developers choosing Scala, or any functional language (pure or not), would merely be settling for “good enough”.

Scala

I’m not a functional programming guru, but I know enough to be dangerous and I like to improve the way I develop software. There are many compelling reasons why people love functional languages: they are succinct, elegant and composable. I’ve heard “functional programmers” [2] claim that they can employ high-level abstractions (with funky names like monad and functor) to get massive increases in productivity (usually one to two orders of magnitude). What’s more, they can back these claims up!

I can only speak first hand about the tools that I’ve used, but I can certainly see how people can make these claims. In what little Scala I’ve written, I’ve seen how you can drastically reduce the size of your code, increase its readability and improve your ability to reason about what’s going on [3]. And this has helped me a lot in my current Rails project!

I’ll say it in different terms: knowing functional programming techniques means you can write Rails apps faster (yes, better than 10x!) [4]. I’ve experienced this first hand on my current project. I’ve written code in 10 minutes that would have taken me half a day in Java, made possible by only a minuscule knowledge of functional techniques (e.g. map and fold). There’s another post to be written countering the claim that no one ever uses these abstractions… I’ve used two folds and a map already this morning.
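To make that concrete, here’s a minimal Scala sketch of the kind of map-and-fold code I mean; the Order type and the numbers are hypothetical, purely for illustration:

```scala
// Hypothetical example: totalling orders with map and fold, the kind of
// task that takes a loop and a mutable accumulator in Java.
object MapFoldSketch {
  case class Order(id: Int, cents: Int)

  def main(args: Array[String]): Unit = {
    val orders = List(Order(1, 999), Order(2, 2450), Order(3, 500))

    // map: project each order down to just its amount
    val amounts = orders.map(_.cents)

    // fold: collapse the amounts into a single total
    val total = amounts.foldLeft(0)(_ + _)

    println(s"total = $total cents") // prints "total = 3949 cents"
  }
}
```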

Once you learn the basics, functional languages also give you a better handle on what it is you’re actually doing, because the language of the problem is the language you’re coding in. You stop thinking in terms of the machine, and more in terms of the problem you’re trying to solve. For example, consider this piece of code from Furnace:

removeHeader(stream).filter(!newLines.contains(_)).take(40)

What does it do? Well it removes the header from a stream of bytes, filters out newlines and takes the first 40 bytes. Does the code look similar to the way I’ve described it?
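For readers who want to run something like it, here’s a self-contained sketch of that pipeline. removeHeader and newLines are hypothetical stand-ins for the real Furnace definitions, and I’ve used an explicit lambda in the filter so the snippet compiles on current Scala versions:

```scala
object PipelineSketch {
  // Stand-in: the byte values we want filtered out of the stream.
  val newLines: Set[Byte] = Set('\n'.toByte, '\r'.toByte)

  // Stand-in: assume the "header" is everything up to the first newline.
  def removeHeader(stream: List[Byte]): List[Byte] =
    stream.dropWhile(_ != '\n'.toByte).drop(1)

  def main(args: Array[String]): Unit = {
    val bytes = "HEADER\nbody line one\nbody line two".getBytes.toList
    // Remove the header, filter out newlines, take the first 40 bytes.
    val result = removeHeader(bytes).filter(b => !newLines.contains(b)).take(40)
    println(new String(result.toArray)) // prints "body line onebody line two"
  }
}
```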

When I first tried to write this code, I thought about the underlying stream of bytes: pulling them off one by one, what about buffering, what happens if I read too many and need to skip back, and so on. My thoughts had been warped by years of imperative thinking. When I discussed the problem in the office, I realised it could be represented simply if I just thought about what it was I wanted to do, and the code came very easily after that [5].

Summary

Of course there are tradeoffs here, and learning functional techniques is mind-bending (in a good way); you’ll also realise how little you actually know. By using a language like Scala you get to have your cake (e.g. great APIs, type inference, DSLs, etc.) and eat it too (static typing). You also get great productivity improvements. Am I willing to put a number on it? No, I don’t have enough experience yet, but I can see that it has the potential to be large.

This has been a bit of a rambling post, but my central premise is these two things: 1) using functional techniques will let you be more productive in the general case (i.e. not just web apps) than Rails makes you for writing web apps; and 2) learning functional techniques will make you a better “Rails programmer”.

Footnotes

[1] To be fair, Josh is probably just throwing out the names of the latest languages that are causing a buzz, so Scala gets lumped into that. Still, this doesn’t make the statement true.

[2] Functional programmers usually don’t like to use this term, but I’ll use it here for quick categorisation.

[3] You also get other nice things like improved testability, increased maintainability, flexibility, etc.

[4] I have come up against some of the barriers to making things even easier; for example, using APIs that rely on side-effects and are not referentially transparent breaks your ability to compose functions, which at its least is really annoying, and at its worst has a huge impact on a project.

[5] This code is not without its faults. It builds the stack, but is composable. A solution using iterators won’t build the stack and is more efficient, but is not composable, leading to code that is harder to reason about.

18 Responses to 'Rails 10x more productive, Scala 2x. Really?'


“Ruby’s lack of a static type system also leads to issues, however I don’t want to address these in this post.”

I am sorry, but formulated this way it just asks for comments or complaints.

What is the specific, exact problem therein?
Which are your data nodes (or “types”) you have a problem with?

The reason I am asking back is that people continually raise this point – not just here, sometimes on the mailing list – but do not back it up with code.
It would be better if such complaints were formulated clearly WITH code, or not mentioned at all.

There are quite a few people around the traps these days who are prepared to take a stab at languages that require solid mental discipline (let’s not mention names). One can only speculate at their motivation (conscious or not). I just throw them in the “no idea what they are talking about” basket, and so they are easily dismissed.

C# is significantly more productive than Ruby. Scala is significantly more productive than C#. Your brief point about composition was the key one.

> The reason I am asking back is that people continually raise this point – not just here, sometimes on the mailing list – but do not back it up with code.

I find it hard to understand how people can’t infer this given a moment’s thought.

Here’s a simple argument:

1. Ruby has no static checking.
2. Even the simplest static checking ensures the absence of typos or unknown fields.
3. Since Ruby does not check these basic errors before runtime, it forces the developer to write unit tests for even the simplest of co-operating classes.
4. These pointless unit tests hurt productivity, particularly when refactoring large codebases, as renames and structure changes necessarily introduce naming mismatches. So you write the class, write the unit tests, rename a field, rename the field in the unit tests, run the tests, fix breakages, then move on.

Any kind of statically checked language does not force a suite of unit tests for such trivial checks; they are performed automatically without developer intervention. So remove every instance of ‘unit test’ from the steps above, except ‘run the unit tests’, which becomes merely ‘compile the program’.

What part of this argument is hard to grasp? And this is only the trivial application of static checking. There are more sophisticated uses to ensure safety, but this suffices to demonstrate the point.

Just as a note, the functional aspects of C# are actually influenced by Haskell, and less by F#, as one of the lead developers of C# is a Haskell user.

My favourite things about functional languages are that, in the hands of a good programmer, the code is practically bug free; and the concepts of fusion and map/reduce are very useful in terms of optimization and making code easily parallel; these are ideas that can easily be adopted in imperative programming languages.
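As a small illustration of the map/reduce point above, here’s a Scala sketch (the word list is hypothetical): each map and reduce step is independent of the others, with no shared mutable state, which is exactly what makes it straightforward to split the work across cores.

```scala
object MapReduceSketch {
  def main(args: Array[String]): Unit = {
    val words = List("to", "be", "or", "not", "to", "be")
    // map each word to its length, then reduce the lengths to a sum;
    // every step is independent, so the work could run in parallel.
    val totalChars = words.map(_.length).reduce(_ + _)
    println(totalChars) // prints "13"
  }
}
```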

Lack of a static type system causes problems? This is an issue that I expect most of the commenters here will argue about – but the truth is, 99% of programmers do not understand the issues (me included), and 90% of programmers assume that statically typed means that you have to have type annotations, forgetting that there is such a thing as type inference which is currently being heavily researched.

> The reason I am asking back is that people continually raise this point – not just here, sometimes on the mailing list – but do not back it up with code.

It doesn’t need code, the argument is trivial.

1. Ruby doesn’t perform any static checking.
2. Static checks at the very least rule out typos, missing fields, etc.
3. This forces Ruby developers to write and maintain silly unit tests to catch typos.
4. Given a large codebase one wishes to refactor, this requires the developer to perform a rename, update the unit test, run the unit tests, fix any breakages, then move on. Writing and maintaining these tests is a waste of time that hurts productivity.

With even trivial static checks, the work of writing and maintaining these silly unit tests is done automatically by the compiler.



On 1 and 2 you’re completely right, but an IDE can do that easily, as has been shown by Smalltalk IDEs (and maybe NetBeans or Aptana, but I haven’t used either of those).

In 3 you’re assuming that Ruby programmers write unit tests to catch typos, which only shows that you don’t know what unit testing is about. The basic principle is that you’re testing the behavior of your application, and you’ll do that regardless of the type system (remember, Java and .NET are static, yet people still use JUnit/NUnit).

When you test the *behavior* of your program in most cases you’ll be implicitly checking the types and you’ll be covering the parts that might include typos, so that really does not matter. When you call function ‘foo’ you don’t really test the types/typos of the code inside, you test the function as a whole.

It’s funny what you mention in 4; a quote from Martin Fowler in the “Refactoring” book: “If you want to refactor, the essential precondition is having solid tests”. And he uses Java and JUnit for his examples.

Also, a refactoring usually does not involve changing the public interface of your code, so if you wrote good tests you can achieve 100% coverage _without_ testing specific methods (as opposed to an interface). In this case you could go on renaming whatever you want and you wouldn’t have to change the tests.

If you’re modifying the public behavior (like names) of your app then you’ll have to change your tests in Ruby, Scala, Java or COBOL.

The reason people don’t grasp that is because #3 is something you just made up out of thin air. Dynamic language programmers don’t write unit tests to solve the same problems compilers catch. Tests are written to verify the intended functionality. The type of errors that are typically caught by the compiler in, say, Java show up just as quickly at runtime in the vast majority of cases.

The fact is that when people write software, they don’t just pass parameters around willy nilly. It’s usually very simple to infer the type of any variable at any time. So why should we have to declare the type of everything if it’s intuitively obvious? And more importantly why should we build baroque structures to allow us some sort of multiple-type flexibility when what we want to do is conceptually simple?

Now it should be clear that what I’m talking about here is Java. I don’t have anything against static typing in general because I’m not familiar with enough languages. I know that static typing per se doesn’t need to preclude doing many of the simple and useful things we do in Ruby that are painful and obtuse in Java.

Regarding #3: unless you have 100% code coverage, which nobody does, your unit tests won’t necessarily catch all simple syntax errors. I do a lot of Python coding, and one of the most annoying things is silly failures due to typos that could easily have been caught by a compiler.

I suggest giving statically typed, type-inferring languages a try; they don’t force you to make up anything baroque. OCaml and Scala are both good examples.

It appears you don’t understand the type-checking versus no-type-checking issue very well, and you sound like a type-checking fanboy mouthing off.

One problem with type-checking is that in most languages it requires more code. As code size increases, so can the bug count. That is, type-checking can actually increase bugs.

Another problem is that type-checking catches a relatively small class of errors (in most languages it usually doesn’t even catch all type-related errors), and trivial ones at that. The serious bugs are the ones the type checking doesn’t catch.

“One problem with type-checking is in most languages it requires more code”

That’s a problem of those languages, not of type checking. Read about type inference, use Scala, Haskell, Ocaml, hell even C# has a bit of inference if you want to start gently. You can’t write off all type checking as useless if you won’t look further than Java. Well you can, but you won’t look sane.
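A small Scala sketch of what that inference looks like in practice: no type annotations are written anywhere, yet every expression is still checked at compile time.

```scala
object InferenceSketch {
  def main(args: Array[String]): Unit = {
    val xs = List(1, 2, 3)                   // inferred: List[Int]
    val doubled = xs.map(_ * 2)              // inferred: List[Int]
    val labelled = doubled.map(n => s"n=$n") // inferred: List[String]
    println(labelled.mkString(", "))         // prints "n=2, n=4, n=6"
    // xs.map(_.length)  // would not compile: Int has no 'length'
  }
}
```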

@markus I’m sorry if my tone caused contention, it was written haphazardly across the course of the day and the static types argument was not one I wished to pursue in this post.

“What is the specific, exact problem therein? Which are your data nodes (or “types”) you have a problem with?”

I will answer briefly here, and will also try to post about it soon.

My problem basically boils down to this: it’s hard to understand what a piece of code is doing without knowing the types. In Ruby, before you can reason about what some code is doing, you need to reverse engineer the types, often by looking at calling code and trying to deduce parameters, etc.

Take, for example, a function parameter called tags. What type is it? Who knows? Well, the calling code might, but throw in a few Rails plugins and try to work it out.

Sometimes “tags” are instances of the Tag class and sometimes they’re strings. Which is it in this case? I could obviously whack some debugging code in there to find out, but that’s a massive waste of time and completely inefficient. Now if this code was well isolated and tested it may be easier to deduce.
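For contrast, here’s a hypothetical Scala version of the tags example (the Tag class and describe function are made up for illustration): the signature states the type, so a reader never has to reverse engineer it from call sites.

```scala
object TagsSketch {
  case class Tag(name: String)

  // The parameter type documents itself: these are Tag instances,
  // not raw strings, and the compiler enforces it at every call site.
  def describe(tags: List[Tag]): String =
    tags.map(_.name).mkString(", ")

  def main(args: Array[String]): Unit = {
    println(describe(List(Tag("rails"), Tag("scala")))) // prints "rails, scala"
    // describe(List("rails", "scala"))  // would not compile: String is not Tag
  }
}
```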

I’ve heard it said that dynamic typing gives you extra freedom, which static typing prevents. However I’ve been writing Ruby professionally now for about 8 months (longer on a non-full time basis) and I’ve only really experienced this benefit once. While this was a nice thing, it didn’t buy me more than about 30 seconds of time over what a static language would offer. And the downsides are significant.

The idea that the extra code involved with static typing increases bugs, because more lines of code means more bugs, is logically spurious and doesn’t pass muster with anybody doing production work in both dynamic and statically typed languages.

What previous posters are trying to explain is that static typing is a whole raft of unit tests provided to the developer by the compiler at very little cost… This is a good thing… The majority of bugs are the stupid ones that have the developer blushing the moment they are pointed out… Inappropriate type usage falls into this bracket.

Also, as mentioned, knowing (or discovering) what an object can do is essential for any platform catering to an OO paradigm. It’s not optional. Either that knowledge is implicit in what the developer knows, or explicit in either documentation or declaration… Checking at compile time that you can do what you’re trying to do to an object is a good thing. Again, it eliminates a lot of common errors that even the most gifted programmer can make.

Now people could come in with valid arguments from a variety of language camps as to why, outside an OO paradigm, different approaches may be safer or more productive… Ruby isn’t one of those camps. It’s a relatively straightforward, multi-purpose, multi-paradigm but largely OO mainstream language. It really isn’t that funky or special, and it certainly isn’t special enough to excuse dodging the benefits of static typing.

If you’re a lone hacker coding your dream for the next dot-com giant web service, then use whatever you want however you want. If you plan on participating in projects that span years, with many people, you have to consider the maintenance of a large codebase by lots of people… Static typing starts to look really compelling then… Yes, you can do it, and do it very well, in Ruby, but you can’t brush aside the cost of not having static typing as being of no consequence.

If I’m breaking an interface that another developer is relying on in a completely different part of the codebase that I’ve never even been in, I’d prefer it if the compiler told me… Yes, I shouldn’t be modifying an interface in a breaking way, but maybe I’m only mediocre, maybe I’m tired after a tough week, maybe I have a hangover.

I know how many stupid mistakes I make day-to-day, hour by hour. I know how many stupid mistakes I make, because the compiler warns me.