Wednesday, January 30, 2013

Today I have been reading a lot about technical debt, and I
keep seeing statements like

“… at this point they choose to take on some technical
debt…”

I have never seen
a programmer make this choice.

Which is not to say I have never seen a programmer take on
technical debt.

Allow me to explain with another metaphor:

Weight.

I have never said, “I think I will eat this piece of cake to
gain some weight”

I have eaten cake. I have not watched what I ate. I have not
exercised afterward. As a result I have gained weight. But it was never an
intentional choice; it was just a side effect of my actions. It was NOT
intentional.

Likewise, I have seen programmers:

Use bad names

Add lines to long methods

Add an additional if block

Add more methods to large classes

Comment out a section of code

Skip writing tests

Skip refactoring

Ignore extracting a common interface

But I have never seen
a programmer “take out a loan”.

Technical debt isn’t like a mortgage, a large decision you
make with a bunch of thought. It’s more
like a credit card or a bar tab. You just keep coding and coding, little by
little, and then one day you realize you have a large amount of debt.

(Side note: I have quite often seen people realize that they are in debt and then actively decide not to pay it off. You could argue that this is a form of "intention", but deciding to keep your debt is not even close to deciding to take on debt.)

On the flip side, I see quite a lot of intention from people who stay fit and slim. They
actively watch what they eat, both the quantity and the type of food. They make
a habit of exercise. As a result they tend to assume that everyone else does the same.
After all, you chose to eat that ice cream, right? Yes, but I didn’t choose to
get fatter; I didn’t consider my weight at all when I ate that ice
cream.

And that’s the point: when we analyze why people
choose technical debt, we miss the fact that the vast majority
of people DON’T CHOOSE technical debt. It is a side effect of their actions,
but never part of the decision.

Monday, January 7, 2013

Yesterday, David Heinemeier Hansson wrote a blog post: ‘Dependency injection is not a virtue’. There are a lot of things mixed together in that post, all finally tied together with the statement “I'm a Ruby programmer”. I have found that this moves people into more of a sports-team cheering mindset and does little to keep the conversation rational and productive.

So let's take a step back and start to unpack the meaning behind 'dependency injection'.

Let’s skip the opinion part of which is preferable, and start by pointing out that both of DHH’s examples actually are dependency injection, albeit via different implementations (monkey patching vs. parameter passing).
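To make that concrete, here is a minimal Ruby sketch (the class names are hypothetical, not DHH’s actual code) showing the same swap done both ways: once by passing the dependency in as a parameter, and once by monkey patching it in place.

```ruby
# Parameter passing: the collaborator (a clock) is handed in explicitly.
class Publisher
  def initialize(clock = Time)
    @clock = clock
  end

  def publish_time
    @clock.now
  end
end

# A stand-in clock, frozen at the Unix epoch (handy in tests).
class FrozenClock
  def self.now
    Time.at(0)
  end
end

puts Publisher.new(FrozenClock).publish_time  # the injected clock answers

# Monkey patching: the same swap, done by redefining Time.now globally.
class Time
  class << self
    alias_method :original_now, :now
    def now
      Time.at(0)
    end
  end
end

puts Publisher.new.publish_time  # the default Time now answers with the epoch
```

Both snippets end up calling a substituted implementation; the difference is only in how the substitution is wired: explicitly at the call site, or implicitly for the whole program.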

So I wanted to break down the first misconception my experience has shown to be common in the programming community: the confusion between Dependency Injection and Dependency Injection Frameworks.

Disclaimer: as each framework is unique, this is a fairly blanket statement I am about to make. It may not apply to your framework.

Dependency Injection (Concept)

Dependency Injection (DI) is a general concept that addresses polymorphism. The idea is that you will have different implementations of things, and you need some way of switching which implementation you use. The programming world needed a label for that concept and chose Dependency Injection. There are many, many forms of Dependency Injection, and they offer different pros and cons for different scenarios. Off the top of my head, I can think of quite a few methods for Dependency Injection:

parameters

inheritance

mocks

byte code manipulation

Global Variables

Factory Pattern

Callbacks

Inversion of control (IoC)

Monkey Patching

Dependency Injection Frameworks

Reflection

There are many more and as programming evolves many more will be created.
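As one illustration of an item from the list above, here is a sketch of the Factory Pattern used as DI in Ruby (all the names are hypothetical): callers ask the factory for “a notifier”, and the factory owns the decision of which concrete implementation they receive.

```ruby
# Hypothetical notifier classes sharing the same duck-typed interface.
class EmailNotifier
  def deliver(msg)
    "email: #{msg}"
  end
end

class SmsNotifier
  def deliver(msg)
    "sms: #{msg}"
  end
end

# The factory owns the choice of implementation; callers never name one.
class NotifierFactory
  REGISTRY = { email: EmailNotifier, sms: SmsNotifier }.freeze

  def self.build(kind)
    REGISTRY.fetch(kind).new
  end
end

notifier = NotifierFactory.build(:sms)
puts notifier.deliver("build passed")  # prints "sms: build passed"
```

Swapping implementations is then a one-line change to the registry, with no edits to any calling code.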

Dependency Injection Framework (Post Compiler DI)

Dependency Injection Frameworks, on the other hand, seem to have evolved to support a specific type of scenario. Namely:

“How do I inject a polymorphic instance after compilation time?”

You might ask, why would I want to do this? There are many answers, the best being plugins (you do not want to have to recompile your web browser for a plugin to work). There are, of course, many more reasons.

With non-compiled languages, it gets even weirder to think of “compile time”, but the concept is still valid, especially if you are not at the top of the runtime stack.

Dependency Injection Frameworks usually achieve their DI via a combination of factories, reflection and some sort of runtime configuration setup (files and metadata are common).

They can be a bit of a heavy-handed solution if you are using them for DI when you aren’t really concerned about what happens after compilation time.
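The factories-plus-reflection-plus-configuration recipe can be sketched as a toy “framework” in a few lines of Ruby (class and key names here are invented for illustration, and a hash stands in for the config file a real framework would read):

```ruby
# Two interchangeable implementations the configuration can choose between.
class DiskStore
  def save(data)
    "wrote #{data} to disk"
  end
end

class MemoryStore
  def save(data)
    "kept #{data} in memory"
  end
end

# Runtime configuration -- a real framework might read this from a YAML
# or XML file at startup; a hash keeps the sketch self-contained.
CONFIG = { "store" => "MemoryStore" }

# The "framework": look the class name up in the config, turn the string
# into a class via reflection, and act as a factory to instantiate it.
class Container
  def self.resolve(name)
    Object.const_get(CONFIG.fetch(name)).new
  end
end

store = Container.resolve("store")
puts store.save("session")  # prints "kept session in memory"
```

Note that the line doing the resolving never names MemoryStore; which class gets built is decided entirely by data available at runtime, which is exactly the “after compilation time” scenario these frameworks target.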

Was DHH talking about Dependency Injection Frameworks?

It is worth noting that DHH never actually mentions dependency injection frameworks, although there is definitely a lot of mention of them in the conversations afterwards. There is a lot more in his post worth talking about, but let’s save that for a different blog…