A useless blog from, and for, useless developers

“I’d love to improve this design, but I just haven’t got the time”
“I really want to tidy this up a bit more, but it’s not in the scope of this task”
“We don’t have time to re-think the architecture of this module right now”

Every developer has heard those phrases a hundred times. Hell, every developer has said those phrases a hundred times.
And, as much as we hate to admit it – many times it’s the right thing to do.

I would’ve loved to be paid to deliver elegant, beautiful code, but the reality is that my employer pays me to deliver functionality that is useful to them and to their customers; AKA value.
Focusing on value delivered to the customer is the biggest emphasis of modern tech businesses, and it became even more prominent following the popularity of Eric Ries’s “The Lean Startup” and the whole “tech product development” school of thought it inspired.

However, we all (including Ries & friends) also recognize that focusing entirely on customer-facing functionality is a mistake; this is where you get into the oh-so-familiar “tech debt” zone, and then “tech bankruptcy”, leading to the dreaded, and wasteful, admission of failure – the “complete re-write”.

Therefore, we can generally agree that, as software developers, maintenance – i.e. keeping the codebase in a healthy state – is an important part of our job.

But if we all recognize that keeping a healthy codebase is an important part of our job, how do we so often find ourselves with our “value” dials at 100%, but our “maintenance” dials at (nearly) 0%?
We know that maintenance is a must. Our tech lead knows it. Our CTO talks about it, sometimes even the CEO is familiar with the concept of “tech debt” and how terrified the techies are of it.
Yet, on the ground, priorities remain the same. “I don’t have time to fix it right now” – where “right now” somehow means “ever”.

Is there a legitimate reason?

The most popular explanation is the “Big Bad Product manager” theory: I don’t get to do technical maintenance because the evil business person keeps pushing features on me.
I find this explanation insufficient, for two reasons:

Firstly, contrary to how we like to portray them, product managers are reasonable human beings. They, too, know and fear tech debt, and they want to avoid it.
And even if they don’t, keeping the technical side of the business healthy is what your tech leadership team is there for.

Secondly, as we observed above – maintenance is part of your job.

Since when do you have to ask for permission to do your job?

A doctor won’t forgo thorough examinations and tests because “we promised the patient they’d be healthy by the end of the quarter”.
A professional contractor won’t start laying down foundations before she’s verified that the ground can sustain the structure she’s building.
So why is it different in software?

A lot of the reason lies in the extreme immaturity of our industry.
We don’t have enough professional standards in place to make code health a non-negotiable part of the process, like safety checks in construction, or sanitization in health services.
(This, in turn, is because we haven’t demonstrated enough professional ability to justify such standards; i.e. – we took the time to ‘repay tech debt’ in the last project, but it still turned out crap. So how can we credibly demand to do the same thing in the next project?)

(Of course, this observation will not be news to anyone who listened to what Uncle Bob Martin, and many others, have been saying for years now)

But I believe that this is not the whole story – it’s not just that we’re not good (i.e. professional) enough to do maintenance properly. Part of the problem is that we don’t even try.
(Remember – it’s “I didn’t do it”, it’s not “I tried, but failed to do it”)

Why don’t we even try to do maintenance?

Let’s consider your typical developer. In any given day, they have the choice to deliver either value or maintenance. We know that they (nearly) always choose the former over the latter. Even though their tech lead, CTO, colleagues, all talk about the horrors of tech debt every single day.
Why is that?

How many times have you (or a colleague) been praised for delivering a feature that was very beneficial to your clients?
Now, compare that to how many times you (or a colleague) have been praised for refactoring, maintaining, or adding technical documentation to your project?

Or, from a different angle – what do you think would be the consequences for you, personally, if you fail to deliver one, out of 100 planned features, to your client?
Compare that to what you think the consequences would be for you, personally, if you fail to improve / maintain the codebase in 100 out of 100 opportunities.

If your workplace is even remotely similar to any workplace I’ve ever seen, you know that delivering excellent maintenance might gain you the gratitude of your coworkers, but delivering features will gain you a promotion.
Which would you pick?

This conclusion is hardly surprising or novel; if there’s one thing I know about market economics (and there is only one thing I actually know), it’s that people will optimize their behaviour based on the incentives presented to them.
Pay developers by lines of code? You get the same functionality written ten times more verbosely.
A bonus for every bug fixed? You get bugs planted in the application, so that they can be fixed later.
No tangible reward at all for maintenance? You get tech debt.

Walk the walk, don’t just talk the talk

How can we move the ratio for developers away from 100% value – 0% maintenance?
Simple – managers need to put their money where their mouth is;
Stop talking about wanting to tackle tech debt. Instead, show that you’re willing to pay people to tackle it.
(Balance is important here; while we don’t want the split between value and maintenance to be 100/0, a 50/50 split is also unreasonable)

How about having maintenance-related line items as part of your yearly review process? As part of the developer role specs?
Why not, for every 10 times you ask “how’s such-and-such feature coming along?”, ask “how have you improved the code lately?” once?

Once people understand that there’s an incentive for them to keep the codebase healthy, that their promotion, raise, status within the organization, is affected by it, they will find the time, scope, and energy, to get it done.

(*Huge caveat: To effectively judge whether someone is achieving a certain goal, you need to be able to measure it. Measuring code quality, or the amount of improvement of code quality, is not a solved problem by any means. It’s a subject worthy of a blog post (or book, or academic research) of its own.)

What do you think? Does this description map to your own experiences? Is there a legitimate reason why we’re prevented from paying back tech debt that I’ve missed? Or maybe this isn’t actually a problem at all? Let me know below.

It’s a daily occurrence when adding features to a system –
we currently have functionality X implemented, but now we also need functionality X’, which is just slightly different.
Re-implementing most of X with some modifications is code duplication, and this is Bad™.
We need to keep our code DRY, which means that both X and X’ need to share some code.

For many, especially in certain communities (ruby), the first – and only – solution they’d consider is:
“Let’s make X’ inherit from X”
“That way, we can change some of the functionality in X’ by overriding some methods / properties”.

This isn’t always the best solution.
Let’s look at the (over-simplified) example of this service, which handles the process of creating a new user record:
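The code sample for this service appears to be missing from this version of the post, so here is a minimal sketch of the shape being described. Everything beyond the `UserCreator` / `AdminUserCreator` names (the method names, the hash-based “record”) is my own guess at the original:

```ruby
class UserCreator
  def create(params)
    user = build_user(params)
    send_welcome_email(user)
    user
  end

  private

  def build_user(params)
    # a plain hash stands in for a persisted record
    { name: params[:name], role: role }
  end

  def role
    :regular
  end

  def send_welcome_email(user)
    "Sending: Welcome #{user[:name]}!"
  end
end

# The child class has no public interface of its own --
# it only overrides its parent's private methods.
class AdminUserCreator < UserCreator
  private

  def role
    :admin
  end

  def send_welcome_email(user)
    "Sending: Welcome, almighty admin #{user[:name]}!"
  end
end
```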

This is where the alarm bells should start ringing: AdminUserCreator has only private methods and no public interface?? What’s the justification for that?
Seeing this unusual situation alerts us that our design may be lacking.

Looking more critically at our solution, we can spot a few issues:

Single responsibility is broken

As we all remember, single responsibility means that an object has only one reason to change.
In our case, UserCreator should only change if we change the way we create users.
But, we just made quite a lot of changes to UserCreator to accommodate the way we create admin users.
Meaning: UserCreator has two reasons to change.

Encapsulation is broken

As we can clearly see above – UserCreator is accessing private members of AdminUserCreator. One class interacting with another class’s private methods isn’t an ideal situation.
Worse – the other way around is also true – AdminUserCreator can also access the private parts of UserCreator!

Which leads us to the next point:

Interfaces are broken

Take this exaggerated example: the welcome emails for the two types of users are completely different, but the email title is similar. What would that look like?

class UserCreator
  # our public methods

  private

  def send_welcome_email(user)
    title = get_title(user)
    # more email generation...
  end

  def get_title(user)
    friends_stats = get_friends_stats(user)
    "Welcome #{user.name}! you have joined #{friends_stats.number_of_friends} of your friends who were active #{friends_stats.activity_last_day} times today"
  end
end
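To make the confusion concrete, here is a self-contained sketch of the resulting ping-pong between the two classes (the `FriendsStats` struct and the method bodies are my own stand-ins):

```ruby
FriendsStats = Struct.new(:number_of_friends, :activity_last_day)

class UserCreator
  private

  def send_welcome_email(user)
    "<regular email titled: #{get_title(user)}>"
  end

  def get_title(user)
    stats = get_friends_stats(user) # which implementation runs? depends on self!
    "Welcome #{user}! you have joined #{stats.number_of_friends} friends"
  end

  def get_friends_stats(_user)
    FriendsStats.new(3, 7)
  end
end

class AdminUserCreator < UserCreator
  private

  # A completely different email body, which still reaches up into the
  # parent for get_title...
  def send_welcome_email(user)
    "<admin email titled: #{get_title(user)}>"
  end

  # ...while the parent's get_title silently calls back down into this
  # override, without anything at the call site saying so.
  def get_friends_stats(_user)
    FriendsStats.new(42, 9000)
  end
end
```

Reading either class in isolation tells you almost nothing about what an admin’s welcome email actually contains.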

Confused? So are we.
It’s impossible to understand what a single method does without looking at both classes, and having to guess, at each method call, what actually is being executed.

Look at AdminUserCreator again – it only has some floating methods without any context that are impossible to understand (you can’t tell when or how get_friends_stats is called, for example, because it isn’t called in the class you’re looking at!)

(And just imagine what would happen if we added SuperUserCreator to the mix!)

The issue here stems from the fact that we have two classes that interact with each other, but it’s impossible to know when these interactions take place:

The interfaces between these two classes are inferred (based on whether a certain method has been overridden or not) rather than explicit (based on defining, at the call site, the object on which the method is executed).

Single responsibility is broken

Wait, didn’t we just talk about this..?
Well, it turns out, we can break this principle twice:

Since the child class has no choice but to inherit all the implementation (and therefore – responsibilities) of the parent class, if we add anything to it, it now has multiple responsibilities – its parent’s, and its own.

Imagine that, when an admin user is created, we want to take some more actions – send an email notification to other admin users, create a special log entry to document it, etc.

Because of the way the parent and child classes are all jumbled together, the child class now has more responsibilities than we would want, and we can’t easily extract them, because another class, and a completely different functionality, is involved.

Duplicating responsibilities also leads to –

Test code is broken

Two classes having the same functionality means that we have to have the same set of tests for both of them, to verify this functionality.
This causes either code duplication, or having to share test code (sharing test code in an easy-to-understand way is something I find very difficult to do).
Additionally, the differences between the parent and child classes are in their private methods, which can sometimes be awkward to test.

What’s the alternative?

Prefer composition over inheritance. Meaning – compose the different responsibilities using a third, collaborating object that’ll take on the shared responsibilities, make it configurable as needed, and have both implementations use it.

In our example we can consider two such collaborators:

class UserEmailTitleGenerator
def get_title(user, friends_stats)
"Welcome #{user.name}! you have joined #{friends_stats.number_of_friends} of your friends who were active #{friends_stats.activity_last_day} times today"
end
end
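A second collaborator could take on the shared record-creation work. The post names UserRecordCreator further down, but its implementation here is my own sketch, as is the way the two creator classes compose it:

```ruby
class UserRecordCreator
  def create(name:, role:)
    # would persist to the database in real life; a plain hash stands in here
    { name: name, role: role }
  end
end

# The two creators now *compose* their collaborators instead of inheriting,
# configuring them with different arguments:
class UserCreator
  def initialize(record_creator: UserRecordCreator.new)
    @record_creator = record_creator
  end

  def create(params)
    @record_creator.create(name: params[:name], role: :regular)
  end
end

class AdminUserCreator
  def initialize(record_creator: UserRecordCreator.new)
    @record_creator = record_creator
  end

  def create(params)
    @record_creator.create(name: params[:name], role: :admin)
  end
end
```

Each creator now states, explicitly and at the call site, which object it delegates to and with which arguments.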

..And it’s that stupidly simple. Instead of configuring the two classes differently by overriding methods and attributes, we configure some lower-level services by providing different arguments.
(*in fact, the above example is so simplistic that it can easily be solved by a single class using inversion of control; I’ll leave that as an exercise to the reader)

This approach avoids the aforementioned disadvantages –

It keeps true to SRP – since the two Creator classes are completely unrelated, and can only change independently.

It maintains very clear interfaces – for each method call, we know exactly what method is executed, and against which object, and with which parameters.

Single responsibility is, again, maintained – because no class needs to assume responsibilities that are not strictly its own.
In fact, this approach helps us spread the responsibilities in even finer-grained detail; we can have a class for just creating the user record, a class just for the email generation, etc..

Since common responsibilities are now extracted to a completely separate class, they can be tested only on that class (i.e. creation of the user record can be tested only on the UserRecordCreator class, and not on UserCreator or AdminUserCreator).

There are no free lunches, of course, and these advantages carry with them a new disadvantage –
we’ve now broken up our functionality into many small pieces. These pieces may be too small, and having many small pieces can be very very confusing.

Summary (or: are you telling me that I should never use inheritance?)

No.

I’m telling you that inheritance is one tool in your toolbelt, and it has its uses. But it isn’t the right tool for every job.

Like almost any design decision we make, it all comes down to tradeoffs. Each approach has its own pros and cons.
My thinking is that, even if inheritance doesn’t immediately show all of the cons listed above, starting down that path makes it hard for me to change course and start breaking things up, because by nature inheritance couples my classes quite tightly together.

That’s why I’d usually use inheritance only in these very specific cases:

Is-a relationship: Like in the classic examples we’ve all seen in school – a car is a vehicle, because it has the same properties (move, make_noise), but a different implementation of some of them, or perhaps some additional properties / methods.
(Meaning – it needs to satisfy the Liskov substitution principle: any code that expects an instance of the parent can equally use an instance of the child.)
Note that this narrow definition is almost always restricted to domain classes only;
while it’s easy to imagine that a triangle is a shape, it’s hard to say that a “monthly report controller” is a “report controller”.

Well-known design patterns: Several design patterns utilize inheritance. I’m very comfortable using inheritance in these cases, because it’s very clear what’s going on and how the different classes interact (since the patterns are well defined).
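The first case – a genuine is-a relationship – can be sketched like this (a toy example in the spirit of the ‘school’ examples mentioned above):

```ruby
# A car is-a vehicle: same interface, some different implementation.
class Vehicle
  def move
    "moving at #{speed} km/h"
  end

  def speed
    10
  end
end

class Car < Vehicle
  def speed # overrides the parent's default
    100
  end
end

# Liskov substitution: code expecting a Vehicle works with any subclass.
def travel(vehicle)
  vehicle.move
end

travel(Car.new) # => "moving at 100 km/h"
```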

So it’s been nearly a year since my rant on rails. In this past year I’ve been working on a rather large rails application, trying to help the team tame this 100K-LOC monolith.

We’ve learned a lot about what wasn’t working for us with the out-of-the-box rails architecture, and how to try to work around that.

In this (and subsequent) posts, I’ll try to describe the issues we’ve faced, and our ways of trying to improve things.
Let’s start with one of the most common issues rails applications face:

Fat Controllers

Too much responsibility in controllers is a very easy pit to fall into in a rails application.
I believe this is because controllers come pre-loaded with so much rails magic and functionality that it takes very little for them to end up doing ‘too much’.

We all know that controllers should only concern themselves with receiving data from the client and passing data to the views. Like so:

class SkinnyController < ApplicationController
  def create
    result = ... # do something with params[:some_model]
    if result
      @entity_to_show = ... # get whatever it is we want to show
      @another_param_for_the_view = ...
      render 'some/partial'
    else
      render 'something/else'
    end
  end
end

Things in real life, however, don’t always look like the ‘hello world’ example from RailsGuides.
In real life, we have requirements such as:
1. when an Order is purchased, a payment is taken, the nearest warehouse that has all of the OrderLine items is notified, and an email is sent to the client.

2. the user shouldn’t have the option to pay for the Order if any of the OrderLine items is not currently in stock.

Where do we put all of this logic?

Traditionally, developers would recognize that taking payment and sending emails aren’t domain model responsibilities.
Additionally, signalling to the view which options should be shown, seems to be appropriate for the controller.
So, the first solution most of us would come up with is sticking all of the above logic in the controller.
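What that “everything in the controller” action tends to look like, sketched with illustrative stand-ins (none of the class names or data shapes here are real rails or app code):

```ruby
class OrdersController
  def initialize(order, payment_gateway, warehouse, mailer)
    @order = order
    @payment_gateway = payment_gateway
    @warehouse = warehouse
    @mailer = mailer
  end

  def create
    if @order[:lines].all? { |line| line[:in_stock] }  # business rule...
      @payment_gateway.charge(@order[:total])          # ...payment logic...
      @warehouse.notify(@order)                        # ...fulfilment logic...
      @mailer.send_confirmation(@order)                # ...notification logic...
      :render_confirmation                             # ...and the view decision,
    else                                               # all in one action
      :render_out_of_stock
    end
  end
end
```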

This is of course a Bad™ thing.
Firstly, because it puts business logic into the controller, which couples business rules and presentation (Bad!).

Additionally, a point that isn’t immediately evident regarding controllers, is that they’re harder to test than other types of classes.
Therefore, we would like them to have as little logic as possible, because any logic that they contain would be difficult to validate.

The reason why controllers are hard to test is that they don’t have a very well defined API – they just send some messages to a bunch of models, and then render something.

Testing interaction with model classes

Requires us to either:
1. stub them (discouraged for non-statically-typed languages, since it’s much harder to make sure the stub’s interface matches the actual class’s interface: expect(card).to receive(:pay).with(amount, cvc) might pass even if the pay method actually takes the cvc as its first argument and the amount as its second)
Or
2. read objects from the database and verify them (which poses two problems: 1. performance; 2. we’re now testing the model classes, which is not the point of these tests).

This quickly gets very messy the more logic you add on.

Testing that the correct data is rendered

Requires us to either:
1. test the value of the @show_payment_option instance variable (problematic – we don’t want to test private attributes)
Or,
2. test the rendered view (Problematic – coupling our controller test with the partial implementation, and introducing more complexity by having to test generated HTML)

For the above reasons – the controllers’ high coupling with the view, and their reduced testability – rails devs are commonly told to aspire to “Skinny controllers, fat models”,
which is to say – take as much responsibility away from the controllers, and put it into the models.

Obese models

take as much responsibility away from the controllers, and put it into the models.

As we saw in the previous section, the first part of this sentence is absolutely true; we don’t want a lot of responsibility in our controllers.

However, just blindly sticking all of it into the model classes isn’t great either.

Let’s go back to the 2 examples from before:
1. the system needs to take payment, notify a warehouse and send an email when an Order is purchased,

2. the user shouldn’t have the option to pay for the Order if any of the OrderLines is not currently in stock.

The first requirement obviously does not belong in a single model class; it concerns many model classes, and possibly some other services.

However, even the second, simpler requirement, is tricky.
At first glance, it makes sense to have a can_pay? method in the Order class.

But how can the Order know whether a specific item is in stock? Does it need to fetch it from the database? Query the Warehouse class? Call the external StockManagement service?
Even the simpler case of simply fetching the Item from the database and querying it breaks the isolation of these two classes from each other.
The other described cases are even worse, of course.

Moreover, it’s expected that we would have many more requirements to take some action based on the state of the Order object:
Show the order number in red if there’s an unhandled Complaint attached to it and the order total is more than $100,
stop the order from being shipped if a previous order to the same address was not picked up,
and whatever else your friendly product manager can dream of.

Putting the logic for all of those different things into the model class would create an obese class, littered with has_pending_complaint?, has_previous_denied_shipment?, etc., each of them, potentially, concerned with more than just the Order object itself.

So where does business logic, which doesn’t belong in a single Model class, go?

Domain services

This is a well-known concept in other technologies, however, it doesn’t immediately spring to mind when using rails, simply because it’s not a “thing” in rails.

The point of a domain service is to perform an operation which requires interaction with more than one domain object.
The domain service is still in the domain layer, like model classes.
However, it doesn’t hold any data of its own (and it’s not persisted), but rather it is concerned with coordinating several domain classes to represent a process, or a workflow. For example:
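For instance, the purchase requirement from earlier could be modelled as a domain service along these lines (all class names and collaborators here are illustrative stand-ins, not code from a real app):

```ruby
# Minimal stand-ins so the sketch runs; in a real app these would be
# model classes, a payment gateway client, a mailer, etc.
OrderLine = Struct.new(:item, :in_stock) do
  def in_stock?
    in_stock
  end
end
Order = Struct.new(:order_lines, :total)

# The domain service: coordinates several domain objects to represent
# one business process (take payment, notify a warehouse, email the client).
class OrderPurchaseService
  def initialize(payment_gateway:, warehouse:, mailer:)
    @payment_gateway = payment_gateway
    @warehouse = warehouse
    @mailer = mailer
  end

  def purchase(order, card)
    return :out_of_stock unless order.order_lines.all?(&:in_stock?)

    @payment_gateway.charge(card, order.total)
    @warehouse.notify(order)
    @mailer.send_confirmation(order)
    :purchased
  end
end
```

The controller then only gathers the objects, invokes the service, and renders based on the returned result.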

(For a more detailed description of domain services and their relation to DDD, see here)

Abstracting a business process into its own class / method helps us solve the issues we discussed previously:

It’s better than having the code in the controller, since now these business rules are removed from the presentation concerns, and have a well-defined output which is more easily testable than a view output.

It’s better than having the code in the model class, since it prevents one model class from being too familiar with other model classes, and stops model classes from implementing methods which aren’t directly related to the model realm (such as has_pending_complaint?)

The controller’s job can now be described as gathering the objects to pass to a domain service (either from the received params, or by reading them from the database), invoking the appropriate domain service, and acting on the result.

The model’s job can now be described as defining the structure and operations over a single domain object.

The Domain Service’s job is to achieve a business goal by using several domain objects / services.

Bonus: Query classes

Query classes are a subset of domain services: domain services that don’t cause any changes, and are only used to calculate a result.
Sort of a read-only domain service. (The name is taken, of course, from the command-query separation pattern.)
The above-described use cases, such as has_previous_denied_shipment? and has_pending_complaint? can be abstracted into query classes in order to help keep the models from becoming obese.
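A sketch of what one of these query classes might look like (the data shapes and names beyond has_pending_complaint? are my own stand-ins):

```ruby
class HasPendingComplaintQuery
  def initialize(complaints)
    @complaints = complaints # in real life, fetched from the database
  end

  # Read-only: calculates a result, never changes anything.
  def call(order)
    @complaints.any? do |complaint|
      complaint[:order_id] == order[:id] && complaint[:status] == :pending
    end
  end
end
```

The controller (or another service) asks HasPendingComplaintQuery.new(complaints).call(order) instead of bloating Order with a has_pending_complaint? method.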

Conclusion

Rails does not presume to offer all the tools and paradigms required for any application.
It gives a very good starting point for quick and simple applications.

As your application grows, however, the three simple concepts of an HTML view, a controller, and a model will struggle to contain all of the more complex logic you’ll need.

In the case of processes that require several model classes to cooperate, abstracting them into their own classes will help keep your controller and model code singly-responsible and well-isolated.

This is an introduction to my angular starter project, explaining what it is, what it contains, and why it was built.

This project is a fork of angular-starter, changing it a little bit, and adding some more features and samples to it.

Why Angular 1.x?

It’s true that Angular 2 is now all the rage. However, I think that there are still valid reasons to be starting a new project with Angular 1.5:

Even though it seems like it’s been around forever, Angular 2 is still, at the time of writing, not released.
I’ve heard the angular team say that there are plenty of applications built on the Angular 2 beta running in production, but there are many (myself included) who wouldn’t use anything pre-release in production.

Even after it’s released, the resources and help you can get for Angular 1.x are much much better than for 2.0.
Even though the angular team has done a great job in churning out documentation for version 2, you’re still likely to struggle to find answers and non-trivial examples for angular 2.
This is especially relevant for those who want to create a non trivial angular app that’s more than just forms and CRUD; they will struggle to find examples and solutions for complex problems in angular 2.

There are many teams with existing skills in angular 1. For them, the learning curve of angular 2 might not be worth the switch (obviously, depending on the expected scale and lifetime of the project).

Why another startup project?

True, there are already many. However, I wasn’t able to find one that answers all of my requirements:

Modern (i.e using ES2015 and angular 1.5 with components)

Production-ready (webpack configuration for release)

Follows best practices

Non-trivial (provides working examples of more advanced use cases)

The original angular-starter project covers the first two points very well; however, I felt it could be improved in the latter two categories.

What it is

Modern

Obviously, this project uses the babel transpiler and webpack modules, which allow you to use ES2015 features, as well as angular 1.5 and SASS compilation.
It also contains example code for using angular 1.5’s new component syntax.

Furthermore, it uses ui-router for routing, and ng-annotate for automatically injecting dependencies in a minification-safe manner.

This is currently the recommended way to write angular 1.5 apps (at least until next week, when something better comes along :p ), and I’ve yet to see a starter project that features examples for all of these.

Twitter bootstrap and FontAwesome have already been included, since they’re so widely used.
(If you’re not interested in using them, it’s as easy as simply removing them from the package.json and from your .scss files).

Production-ready

The project’s build script is already configured to concatenate, uglify, and minify your code, with cache busting, ready to be served in production.
The resources directory is copied as-is to the production build, so that’s the place to keep your resources (i.e images).

Follows best practices

Sadly, John Papa hasn’t updated his famous angular 1 style guide to include ES2015 or components.
Thankfully, Todd Motto has stepped up with his own style guide covering modern angular 1 apps. My project tries to follow that guide, especially module structure.

Linting: The project comes with linting using ESLint, and is pre-configured to use the recommended rules.
You’re encouraged to add or modify to them (in the .eslintrc file) as appropriate for your team.

Non-trivial

I found that many samples and starter kits don’t contain any examples of using dependencies, or testing any type of realistic scenario.

This is why I’ve added the home component, which uses a service as a dependency, and has a route that leads to it.
This allows you to see an example of routing to a component, and of using dependencies and the 'ngInject' annotation.

In the tests for this component, you can see how to mock its dependencies using the $provide provider, and how to test using a mock that returns a promise.
I’m also using the inject function to obtain a reference to the $componentController, which is used to create instances of components that you can run tests against.

To understand more about mocking and testing, I recommend reading the jasmine docs.

Summary

I’ve tried to create a starting point that is ready to go.
The idea is to allow developers to clone this starter project, and immediately start writing their application code, without having to spend any time configuring frameworks, tools or dependencies.
I’m completely comfortable with taking this project and starting to build a production application with it (in fact, I have done this), and I hope that others will find it useful as well.

If you have any comments or ideas for improvements (hey, I’m just learning as I go along..), feel free to open an issue (or even better – a PR) on the repository.

I got to thinking recently about the relationship between a developer and his / her code.

I’ve come to the conclusion that this relationship is parental: your code is, figuratively, your ‘baby’.

It’s something that you’ve created, and that you’re proud of. You’d do anything to protect it.

Before you dismiss this as crazy, new-age hippy talk, hear me out; I believe I’ve got a point here.

As with any scientific hypothesis, we don’t only expect it to explain some known phenomena (more on that later on); we also expect it to be able to illuminate and explain other, more subtle, behaviour.
Let’s try that.

So if we accept, as a working thesis, that we treat our code like our baby, it means that a bug found in our code is like being told that our baby has some sort of a horrible disease.
It’s shocking and heartbreaking news.

And, we would expect, that we respond to this kind of news with the appropriate feelings of grief and mourning. Let’s see.

Stages of grief

True story:
Earlier in my career, I was a team lead for a fairly large team (around 7 devs).
The developers who were more junior / newer to the team sat with me in one office (so that they could get assistance from me easily), and a few of the more senior team members sat in the other room (free from the ‘pestering’ of the juniors).

One of our senior team members in the other room, let’s call him ‘Jerry’, was a super talented developer, although a bit young and rash at the time.
He’s a really stand-up guy, also with a great sense of humor, and not afraid to show his emotions.

I’ll try and transcribe here a typical chat log of banter between Jerry and me (yes, we used chat to communicate from the other room. After all, laziness is a virtue :)), that demonstrates quite nicely the different stages of grief Jerry went through whenever a bug was found:

Jerry> Finally! just finished and pushed the changes to the reports. Yay!
Me> Cool man! well done. I’ll take a look in a second.

<5 minute pause while I check-out the code, fire up the system and play around with it>

Me> It doesn’t work.

Stage 1: Denial

In this stage individuals believe the diagnosis is somehow mistaken, and cling to a false, preferable reality.

Jerry> What do you mean? of course it works, I tested it.
Me> Nope, I’m getting an exception here.
Jerry> Are you sure you have the correct branch checked out?
Me> Yes, I’m sure; I can see the new button you’ve added here. And I’m getting an exception.
Jerry> That’s impossible! Show me!

Stage 2: Anger

When the individual recognizes that denial cannot continue, they become frustrated.
Certain responses in this phase would be: “How can this happen to me?”; ‘”Who is to blame?”

Jerry (storming into my office, destroying everything in his wake, throwing terror into the hearts of the more junior developers): How can you be getting an exception??!? I’m telling you I thoroughly tested this code!!
Me (showing on screen): See, if you select a filter before clicking ‘generate report’, the whole thing crashes.
Jerry: Oh, filtering! I forgot about filtering!! Why do I always get these stupid edge cases??!!

<1 minute pause as Jerry contemplates the exception on the screen>

Jerry: A-ha!! I’ve GOT it!
I’ll tell you what – It’s that idiot <insert name of a developer who left the company>’s fault! He did such a lousy job with the filtering, no wonder it crashes!

Stage 3: Bargaining

The third stage involves the hope that the individual can avoid a cause of grief.
People can bargain or seek compromise.

Jerry: Wait a minute, hang on! But why would anyone use a filter before generating a report?? That doesn’t make any sense! They would never do that! Nobody does that. Why would they do that???Me: Well, it looks like a pretty legitimate workflow to me…Jerry: Look, I know Morty and Helen (our main customers and power users). They never filter a report before generating it.
In fact, they never even filter a report at all! I bet you that no-one even uses this stupid feature anyway!
You know how this filtering thing came about? It’s because Helen was able to do filtering in Excel, so she thought that our system must be able to do it too.
But they’ve never used it since we developed it!
Why don’t we just remove the filtering option here, and everything will be ok?
Me: Jerry, we’re not going to remove this feature.

Stage 4: Depression

“I’m so sad, why bother with anything?”. During the fourth stage, the individual despairs.
In this state, they may become silent, mournful and sullen.

Jerry (miserably retreating back to his office): Fine, I’ll fix it…

Jerry (indistinctly, from the other room, talking to a colleague): You know what the problem was? It was <developer who left>’s stupid filtering! And now I have to fix it! What’s the point of developing new features if I have to keep going back to fix someone else’s mess??

Stage 5: Acceptance

<30 minutes of debugging>

Jerry (Clearly heard screaming from the other room): OOOH!! NOW I’ve GOT it! Of course!!! What an idiot I’ve been!

<1 minute pause>

Jerry> Yeah, you were right, I forgot to check the filter values before generating a report.
Anyway, I fixed it now, it’s all good.
Me> Cool man! Well done. I’ll take a look in a second.

<5 minute pause while I check out the code, fire up the system and play around with it>

Me> It doesn’t work.

etc.. etc..

So we can see clearly here that Jerry, indeed, treats his code in a very personal, emotional way.
And, although you yourselves don’t usually get genuinely angry when your code doesn’t work, or desperately try to redefine a bug as a feature, can you not see a little of yourselves, or your colleagues, in Jerry?

What else?

As I mentioned earlier, this theory can explain a few other strange ways in which developers relate to their code:

Response to criticism –
As developers, we pride ourselves on being rational creatures;
If there’s a better, more efficient way of doing something than what we’re currently doing, we’d be happy to learn and improve. Right?

This is where anyone who’s ever worked on a development team goes “Yeah, right…”. It’s never like that.
You always have to fight tooth and nail with someone to get them to change their code, long after you’ve proven to them that your suggestion is better.
And of course, a day later, when they make a suggestion to you, they have to do the same thing.

This is because, even though you know that your child isn’t perfect, you’ll be damned if you’re going to let some stranger tell you that!

That baby is your pride and joy, and it doesn’t need any fixing.
It’s beautiful just the way ~~god~~ you created it.

Developers can’t find bugs in their own code –
How many times have you grappled with some code that mysteriously just won’t work, unable to make any progress, until a colleague passes by your desk and says “oh, you know you forgot to insert a trailing ‘;’ there, right?”. You were looking at it all that time, yet you just couldn’t see it.

Researchers have found that code review is an effective way to find bugs.
Even those who disagree note that 1 out of 6 review comments indicates a possible bug. That’s a lot.

So how can someone with less intimate knowledge of the code find, in 5 minutes of looking, a bug that I couldn’t find in 5 hours of staring at my own code?

This is because, to me, my baby is perfect. I’m so in love with it, I’m just not able to see any of its flaws.

“Yes, it’s quite breathtaking”

As a side note, this is one of the justified reasons that DHH (creator of the Rails framework) has his knickers in a bunch over TDD; you can’t expect a programmer to test their own code effectively.

Afterword (or, “so what?”)

So was Jerry a bad developer who kept pushing bugs into the codebase?
Not in the slightest. He was one of the best I’ve worked with, not least because he had the great showmanship and sense of humor to act out this wonderful show almost daily.

Since then, I haven’t seen any developer act out their grief in such an obvious way. But if you know what to watch for, I bet you can see it in every developer, not least in yourselves.

And this is, I guess, the moral of this whole story – if you can recognize this behaviour, you can take steps to avoid it, or counter it.

Whenever I get a comment or a suggestion regarding my code, I’m mindful that I’m inherently biased, and try to balance accordingly.
I’ve found this realization has helped me considerably in accepting suggestions and improving my skills.

Like another great developer friend of mine once told me:

“If you don’t look at code that you’ve written six months ago and think ‘What is this crap? How could I have been so stupid?’, it means you haven’t improved.”

Last month, JS Monthly London’s host Guy Nesher gave a talk titled “JS interviews”.
The talk contained good explanations and examples of hoisting, closures, variable scope, and other JavaScript gotchas that are so common in technical job interviews.

Guy stated that as “normal” developers, working in angularjs / react / backbone, we never really need to use things like closures.
If we use a linter / strict mode (or, honestly, just some common sense), hoisting is not something we’re going to encounter, and in general – all of that ‘advanced’ stuff – prototypes, apply, bind.. – that’s just stuff you need to know for your interview, and then you can forget about it and go actually do your job.

Guy likened this to a carpenter being asked at a job interview whether he can change a lock (no, I’m not a locksmith), or his opinions regarding some abstract wood-manufacturing techniques (I don’t care, I just cut the wood).

The underlying assumption was that “a deeper understanding of Javascript is expected (but rarely used)” (complete slides from the talk can be found here, courtesy of Guy).

Now, Guy is not your average developer, having spent years in law before making the switch to software, so it’s understandable that he has some different views.

And when I say ‘different’, I mean ‘bloody infuriating’.

You can’t be a driver without being able to change tyres

If you’re anything like me, you probably have steam coming out of your ears by now.
How dare this guy claim that you can be a competent developer without a solid understanding of the technology you’re using, or of general software engineering and CS concepts such as SOLID, data structures, and performance?
What the hell does he take us for, some mindless code monkeys?

Of course you need a deep understanding of the technology and concepts behind the code you’re writing, otherwise you wouldn’t be able to understand why your code behaves or performs in a certain way, and you wouldn’t be able to debug it in certain situations!
You also wouldn’t be able to apply well-known solutions / patterns where appropriate, or understand the cost / benefit of using one technology over another.
That’s obvious!

So during the Q&A portion of the talk, I put the following question to Guy:
“Suppose you were interviewing me for a position, not as a carpenter, but as a driver.
You would ask me ‘do you know how to drive a car?’, and I would answer ‘Of course! Right pedal is to go, left pedal is to stop, and you control the direction with the wheel.’

‘Great’, you would say, ‘And suppose that, while you’re driving along, you get a flat tyre. Do you know how to handle that?’
‘I don’t really know car engineering in depth.. all I know is Right pedal is to go, left pedal is to stop…’.
Would you hire me? I can still get your car from point A to point B.
However, if anything goes wrong, I’ll be stuck.”

“But software development isn’t something you do alone, like driving,” was Guy’s answer.
“In software development, you’ll have a senior developer on the team who’d know how to ‘change tyres’. A kind of a ‘pit crew mechanic’.
But the rest of the time, the ‘junior driver’ is going to be cruising along just fine.”

Are we all such special snowflakes as we’d like to think?

Guy’s answer led me to do some thinking.

Doesn’t a lot of what I do, day-to-day, consist of ‘more of the same’?
CRUD over a database, some validations, showing some aggregation to the user…
In front-end development it’s even easier to spot:
Some form, an AJAX call to a server API, displaying data.. there isn’t even any business logic involved (hopefully).
A lot of the routine is… well, routine.

I don’t use any ‘special’ or ‘deep’ knowledge when I do the above things.
Not all of my time is spent innovating or ‘Engineering’.
A lot of what I do actually isn’t that special.
This is especially true if I’m using very high-level languages (ruby, JS), and even more so when using frameworks on top of those languages (rails / angularJS) to abstract away the ‘scary’ SQL or network operations.

So maybe Guy is right? For the 80% of routine work, you don’t need to hire a ninja rockstar hacker (or whatever the stupid buzzword du jour is);
Just have one senior guy within a team that can handle stuff like architecture, coding standards, tech evaluations, helping the more junior members when they’re stuck, and let the others get on with the day-to-day.

Bootcamps and the rise of the junior developer

The notion presented by Guy ties in very nicely with recent trends in the professional software world.

While demand for software developers is expected to rise at a “much faster than average” rate, universities aren’t producing new graduates at anywhere near a matching pace.
This gives rise to “code bootcamps”: intensive, 6-24 week programs designed to take you from zero to web-developer hero (or, more precisely, to junior web-developer).

This trend, if it continues, means that we’ll be seeing more and more developers who, like Guy, have never studied algorithms, data structures, or anything else that might be ‘under the hood’ in day-to-day work.

It will be extremely interesting to follow these developers over the next few years, to see how well they’re able to make the transition from junior to mid to senior, and how their performance compares with the other two large groups of developers – the formally educated and the self-taught.

What do I do with all these juniors?

Whether we like it or not, it seems that Guy’s (and the bootcamps’) vision is here to stay – more and more developers with little or no ‘deep’ knowledge in programming will be joining the workforce in the coming years.

Numbers alone dictate that – there is, and will be, a huge demand for developers, inevitably leading to lowering the entry barriers for newcomers, and making experienced developers that much more expensive.

That means that it’s very possible that you will end up on a team that is some sort of variation of what Guy has described – a few ‘drivers’, with one or two ‘pit stop mechanics’ to help them along.

It seems that the thing to do right now, instead of looking down our collective noses at these people, is to come up with an effective method for integrating and mentoring these newcomers.
Whether they’re interested in eventually becoming ‘mechanics’, or are content to just stay ‘drivers’, they’ll need our help.

I’ve personally been part of a couple of teams which included bootcamp graduates.
And, of course, I was once a junior myself.

In all these situations, I never felt the team was sufficiently aware of junior developers’ need to be mentored:
The assumption was that, after a suitable period of training, these developers were ‘ready’; they were therefore thrown into the deep end and treated like any other team member.

In my opinion, mentoring / training of junior developers is better as a sustained, consistent process, as opposed to a ‘one and done’ job.

Here’s my $0.02:

Juniors need to receive feedback and advice on their work often, and over a long period of time.
Instilling ideas and ways of thinking is a lengthy process.

Teams need to understand that having junior developers on the team doesn’t only mean that they (the juniors) will work more slowly, due to their inexperience.
It also means that the more senior team members will work more slowly, because they also need to assist their teammates.

Having different skill levels should be reflected in the work being done by team members – some tasks are more complex, or require greater knowledge and experience, so they shouldn’t be done by a junior.

It needs to be official: managers need to let team members know that guiding / being guided is part of their job descriptions.
This will help to avoid friction from juniors who are perhaps too ‘proud’ to accept guidance, and from seniors who can’t be bothered to guide.

Making it official will also guarantee that the subject of training new team members is not forgotten or abandoned as projects and deadlines get more hectic.

Conclusion

This post began with a question – do you need to be intimately and deeply familiar with the tools that you’re using in order to be an effective developer?
The answer, in my opinion, is “No, but up to a point”.

You can be extremely productive in a lot of scenarios without a broad knowledge base.
If you want to progress beyond the ‘junior’ label, however, I think you need to expand your knowledge.

However, regardless of my, or anyone else’s, opinion of such developers, the reality is that we’re going to be seeing a lot more of them in the coming years;
Rising demand for developers, coupled with the rise of the “bootcamp” concept, mean that these junior developers are going to be coming into your team.

The question then becomes: how do we utilise these developers to produce the best quality (and quantity) of work?

People have been trying to answer this question for a while now.
However, it seems that the assumption is always that the individual is responsible for her own training, or, at most – that training is something that’s internal to the team.

I don’t think this is enough; companies need to understand that the success of their projects and their organization is dependent on the success of the juniors.
Therefore, there has to be a management commitment to making these people successful.

Training and supervising these guys takes time and resources, from both the junior and senior members of the team.
It also requires a slightly different work process where junior members are involved.

Also, how will this play out with other factors like the high pace of the industry, and the relatively high turnover rate in our field?
Will a company that needs a project done today be willing to invest in training an employee who might not be there tomorrow?

Edit 2016-04-08: Lessons learned after 48 hours

So I got more feedback to this than I anticipated, mainly via reddit.
Some in the predictable “you’re ugly” form, but a lot of genuinely good suggestions on how to avoid / go around some of the pain points I describe in this post.

I think the main thing was that several people pointed out that it’s completely possible to have your models as POROs, and then use ActiveRecord (or alternatively, something like Sequel) for data access only.

To be perfectly honest, I’m not 100% sure what that would look like, and whether it would be as convenient as I’d like, but it would definitely be an improvement over sticking everything in an ActiveRecord class.

This solution would also greatly help with the issue I have with unit testing, enabling me to decouple those from the database, and definitely cut down on their runtime.
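To make the commenters’ suggestion concrete, here’s a minimal sketch in plain Ruby of what “PORO models with a separate data-access layer” might look like. All names here are hypothetical, and an in-memory Hash stands in for ActiveRecord / Sequel:

```ruby
# Domain model: a plain old Ruby object with no persistence knowledge.
class Employee
  attr_reader :name, :salary

  def initialize(name:, salary:)
    @name = name
    @salary = salary
  end

  # Pure business logic: trivially unit-testable, no database required.
  def give_raise(amount)
    Employee.new(name: name, salary: salary + amount)
  end
end

# Persistence is confined to a repository; in a real app this is the one
# place that would talk to ActiveRecord or Sequel. An in-memory Hash
# stands in for the database here.
class EmployeeRepository
  def initialize
    @store = {}
    @next_id = 0
  end

  def save(employee)
    @next_id += 1
    @store[@next_id] = employee
    @next_id
  end

  def find(id)
    @store.fetch(id)
  end
end
```

The domain class’s interface now contains only what you defined, and swapping the storage strategy touches only the repository.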

In conclusion – I wouldn’t say that rails is terrible / should not be used.
I would definitely say that it has some big traps which are harder to avoid than I’d like.

If I had to re-write this post today, I’d definitely change its title to “Some of Rails’s biggest gotchas”, and I wouldn’t say that you should categorically not use it.
You just need to use it carefully.

Thanks for reading, and for teaching me quite a few new things.

Here’s the original post:

Lately I’ve had the chance to work on a large-ish server application written in RoR. I went from “Wow look at all these cool conventions, I know exactly where everything needs to go!” to “err.. how the fuck do I scale this?” and “This is not where this should go!” in 8 weeks. And this is (partly) why:

(* Yes, this post’s title is a total clickbait. I don’t actually hate Rails; I just think it promotes a lot of bad practices.)

ActiveRecord sucks

I’ve never personally used the ActiveRecord pattern in previous projects; I always felt this would mix up domain concerns with persistence concerns, and create a bit of a mess.
Well guess what, I was right.

In the specific project I was working on, the code for domain classes would typically consist of 70% business logic, and 30% stuff to do with DB access (scopes, usually, as well as querying / fetching strategies).
That in itself is a pretty big warning sign that one class is doing too much.

The arguments as to why ActiveRecord in general is a bad idea are well documented; I’ll briefly recap here:

A model class’s responsibility is to encapsulate business rules and logic.
It’s not responsible for communicating with data storage.

As I mentioned before, a considerable amount of code in our project’s domain classes was dedicated to things like querying, which are not business logic.
This causes domain classes to be bloated, and hard to decouple.

Have you ever, while debugging, listed the methods of one of your domain objects? Were you able to find the ones that you defined yourself? I wasn’t.
Because they’re buried somewhere underneath endless ActiveRecord::Base methods such as find_by, save, save!, save?, save!!!, and save?!.

Well, I made up a few there, but ActiveRecord::Base instances have over 100 methods, most of them public.

ISP (the Interface Segregation Principle) tells us that we should aspire to have small, easy-to-understand interfaces for our classes. Dozens of public methods on my classes are another indication that they’re doing waaaay too much.
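For comparison, a plain Ruby class’s public surface is exactly what you declare, which a quick introspection check makes obvious (a hypothetical class, not real project code):

```ruby
# A PORO exposes only what you define; nothing is inherited from a
# persistence base class.
class Employee
  def salary
    42_000
  end

  def give_raise(amount)
    @salary_bonus = amount
  end
end

# Only the two methods we wrote, versus 100+ on an ActiveRecord model:
Employee.instance_methods(false).sort  # => [:give_raise, :salary]
```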

Abstracting-away the database is notoriously hard. And I believe that ActiveRecord doesn’t do a particularly good job of this.

As noted before, ActiveRecord::Base pollutes your public interface with a plethora of storage-related methods. This makes it very easy for a developer to mistakenly use one of these very storage-specific methods (e.g. column_for_attribute) inside, say, a controller action.

Even using reload or update_attribute indicates that the calling code knows a little too much about the underlying persistence layer.

“Unit” testing

If there was one thing I knew about Rails before having written a single line of ruby code, it was that everything in Rails is unit-tested. Rails promotes good testing practices. Hooray!

So, obviously, one of the first things I read about concerning RoR development, was how to test:

Testing support was woven into the Rails fabric from the beginning.

Right on! Finally, somebody gets it right!

Just about every Rails application interacts heavily with a database and, as a result, your tests will need a database to interact with as well. To write efficient tests, you’ll need to understand how to set up this database and populate it with sample data.

Say what? I must have misread this… let me check again…

your tests will need a database

Yup. I need a bloody database to test my business logic.

So why is this so bad?

The meaning and intent behind unit tests is to test single units of code.
That means that if the test fails, there can only be one reason for it: the unit under test is broken.
This is why you fake everything external that the unit under test interacts with; you don’t want a bug in a dependency to cause your current unit test to fail.

For example, when testing the Payroll class’s total_salaries method, you use fake Employee objects, with a fake salary property, which would return a predefined value.
That way, if we get a wrong total_salaries value, we’ll know for sure that the problem lies within the Payroll class, and nowhere else.
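Here’s that example as a runnable sketch in plain Ruby (Payroll, Employee and the fake are illustrative names from the paragraph above, not real project code):

```ruby
# The unit under test: it depends only on objects that respond to #salary.
class Payroll
  def initialize(employees)
    @employees = employees
  end

  def total_salaries
    @employees.sum(&:salary)
  end
end

# In the test, fakes with predefined salaries stand in for real,
# database-backed Employee objects.
FakeEmployee = Struct.new(:salary)

payroll = Payroll.new([FakeEmployee.new(1_000), FakeEmployee.new(2_500)])
payroll.total_salaries  # => 3500
```

If this test fails, no database, schema, or Employee bug can be the culprit; only Payroll can.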

But, with rails testing, you’re not encouraged to fake anything.
That means that if the total_salaries test fails, it can be because Employee is broken, or my database schema is wrong, or my database server is down, or even something as obscure as a child object of Employee missing a required attribute, so it can’t be persisted and an error is thrown.

This is not how a unit test is supposed to work.

Not only does Rails encourage you to write non-unit unit tests, it also makes it nearly impossible, and very dangerous, to go around it and write proper unit tests.
(* Note that if you use hooks, such as before_update etc., it becomes even more horrible.)

Apart from the horribleness of making it harder for me to determine what went wrong when a test fails, this complete abomination of a testing strategy caused me some more hair-tearing moments:

Our unit test suite took 18 minutes to run. Even when using a local, in-memory database.

My tests failed because a mail template was defined incorrectly.
Since sending an email was invoked in a before_create callback, failing to send an email caused the callback to fail, which caused create to fail, which meant that the record was not persisted, which meant that my test was fucked.
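As a toy illustration of that failure chain (this is not the real ActiveRecord implementation, just a sketch of the mechanism), here’s how a single raising hook can sink the whole save:

```ruby
# Toy model of the callback mechanism: any before-create hook that
# raises aborts the save, so the record is never persisted.
class Record
  def initialize
    @saved = false
    @hooks = []
  end

  def before_create(&hook)
    @hooks << hook
  end

  def save
    @hooks.each(&:call)  # a broken mail template raises here...
    @saved = true
  rescue StandardError
    false                # ...and the save quietly fails
  end

  def saved?
    @saved
  end
end

record = Record.new
record.before_create { raise "mail template is broken" }
record.save    # => false
record.saved?  # => false
```

The test that then fails is about persistence, while the actual bug lives in the mailer: exactly the hair-tearing situation described above.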

Too much magic

Magic is something that is inherent to any framework; by definition, if it does some sort of heavy lifting for you, some of it is going to be hidden from your view.

That’s doubly true for a framework which prefers “convention over configuration”,
meaning: if you name your class / method the correct way, it’s going to be “magically” wired up for you.

This kind of magic is fine and acceptable. The magic that I have a problem with is Rails’s extensive use of hooks (aka callbacks); be it in the controller (before / after action), or in the model (before / after create / update / delete…).

Using callbacks immediately makes your code harder to read:
With regular methods, it’s easy to determine when your code is being executed – you can see the method being called right there in your code.
With callbacks, it’s not obvious when your code is being invoked, and by whom.

I’ve had several instances of scratching my head, trying to figure out why a certain instance variable was initialized for one controller action and not for another, only to track it down to a problem with a before_action callback of a parent class.
The fact that ActiveRecord callbacks can’t be turned off is also a pain in the ass when testing, as I described previously.

Additionally, callbacks are, of course, very hard to test, since their logic is so tightly coupled to other things in the model, and you can’t trigger them in isolation, but only as a side effect of another action.
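One hedged alternative, sketched here with made-up names rather than any real Rails API: pull the side effect out of the hook and into an explicit, separately fakeable step.

```ruby
# The email is sent explicitly, right where you can read it in the flow,
# not as a hidden side effect of #save.
class SignupService
  def initialize(repo:, mailer:)
    @repo = repo
    @mailer = mailer
  end

  def call(user)
    @repo.save(user)
    @mailer.welcome(user)
  end
end

# Both collaborators can be faked independently, so the former
# "callback" logic is now testable in isolation.
saved, sent = [], []
repo = Object.new.tap { |o| o.define_singleton_method(:save) { |u| saved << u } }
mailer = Object.new.tap { |o| o.define_singleton_method(:welcome) { |u| sent << u } }

SignupService.new(repo: repo, mailer: mailer).call("morty")
saved  # => ["morty"]
sent   # => ["morty"]
```

You lose the automatic wiring, but you gain code whose execution order is visible at the call site, and which no longer fails tests through an unrelated mail template.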

Conclusion

I like the idea behind Rails. Convention over configuration is great, and I also totally subscribe to the notion that application developers should write more business-specific code, and less infrastructure code without any business value.

The problem with Rails isn’t that it’s an opinionated framework.
The problem is that its opinions are wrong:

Tying you down to a single, err, “controversial” persistence mechanism is wrong.

Making it impossible to do proper unit testing is wrong.

Encouraging you to do things as side-effects rather than explicitly is wrong.

When it first launched, Rails was revolutionary: it was the first to offer such comprehensive guidelines, and support, to create your application in a standard way.
However, it seems that today, our standards of what is ‘correct’ or ‘recommended’ have changed, while Rails has stubbornly remained where it was 10 years ago.

You can still create a good application using Rails.
It’s just that it doesn’t allow you to create a great application.