So here is today’s brief lesson, which is on dependency injection. Wikipedia may say it’s “a software design pattern that implements inversion of control for resolving dependencies”, but in fact it’s way simpler than that makes it sound. It’s passing objects into a function instead of having the function create them.

In design-patterns parlance, the stream that you want to output onto is a dependency; and rather than have the printValue() function know what stream to use, you gain flexibility by injecting it into the function.
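In code, the function-level version might look something like this (printValue is the name used above; the rest of the sketch is my own):

```java
import java.io.PrintStream;

public class Printing {
    // The PrintStream is a dependency: the caller decides where output goes.
    static void printValue(int value, PrintStream out) {
        out.println("value = " + value);
    }

    public static void main(String[] args) {
        printValue(42, System.out); // inject stdout
        printValue(42, System.err); // same function, different destination
    }
}
```

The function never names a stream itself; whoever calls it chooses.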

That’s all it is.

Of course, in the world of Java, where everything is a noun, you see this much more often in classes than in functions, with the dependency being injected into the constructor, whence it becomes a member of the object. So instead of:
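A sketch of the two versions being contrasted (the exact code is an assumption; ValuePrinter matches the name used in the next paragraph):

```java
import java.io.PrintStream;

// First version: the stream is hard-coded inside the class.
class HardCodedValuePrinter {
    void printValue(int value) {
        System.out.println("value = " + value); // always stdout, no flexibility
    }
}

// Second version: the PrintStream dependency is injected via the constructor.
class ValuePrinter {
    private final PrintStream out;

    ValuePrinter(PrintStream out) {
        this.out = out; // the dependency becomes a member of the object
    }

    void printValue(int value) {
        out.println("value = " + value);
    }
}

public class Demo {
    public static void main(String[] args) {
        ValuePrinter stdout = new ValuePrinter(System.out);
        ValuePrinter stderr = new ValuePrinter(System.err);
        stdout.printValue(1);
        stderr.printValue(2);
    }
}
```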

In the second version of the code, we inject the PrintStream dependency into the object when we create it, so that we then have a ValuePrinter that is configured to do its printing onto System.out; and we could make another ValuePrinter that is configured to do its printing onto System.err or whatever other PrintStream we wanted to use.

jdedge: a closer read might reveal that he did cover the why with the “you gain flexibility” bit. You could lobby for more detail, discussion of trade-offs, etc., but it’s not missing.

A large part of why Design Patterns haven’t been as useful as predicted stems from the tendency to assume anything comprehensible is unsophisticated. Mike’s post is the first one about DI that I’ve seen where I thought I might share it with a new developer and not feel like I’m part of some elaborate hazing ritual.

First, the rationale is that by passing the dependencies as constructor parameters you can more easily select the implementation that you want instead of having it hard-coded into the class. But perhaps that’s what you meant by “increasing flexibility”.

Second, you will usually use a dependency injection (DI) framework to construct objects for you. So instead of using new and passing some parameters, you call your DI framework and ask it for an instance of the class. You also tell the DI framework which default implementations it should use. This way, the framework automatically supplies the parameters to the constructor. And if, in turn, those parameters use DI, they will be constructed as well.
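As a rough sketch of what such a framework does (real containers like Guice or Spring do far more; the registry below is just a toy):

```java
import java.io.PrintStream;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy "container": you register default implementations, then ask it
// for instances instead of calling new yourself.
class TinyContainer {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> type, Supplier<T> supplier) {
        bindings.put(type, supplier);
    }

    @SuppressWarnings("unchecked")
    <T> T getInstance(Class<T> type) {
        return (T) bindings.get(type).get();
    }
}

class ValuePrinter {
    private final PrintStream out;
    ValuePrinter(PrintStream out) { this.out = out; }
    void printValue(int value) { out.println("value = " + value); }
}

public class ContainerDemo {
    public static void main(String[] args) {
        TinyContainer container = new TinyContainer();
        // Tell the container which default implementation to use...
        container.bind(PrintStream.class, () -> System.out);
        // ...and let it supply the constructor parameters transitively.
        container.bind(ValuePrinter.class,
                () -> new ValuePrinter(container.getInstance(PrintStream.class)));

        ValuePrinter printer = container.getInstance(ValuePrinter.class);
        printer.printValue(42);
    }
}
```

A real framework discovers the constructor parameters itself (via reflection or annotations) instead of needing the explicit lambda, but the shape of the idea is the same.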

I also struggled to understand this recently. But I have never actually used it. Maybe it’s because of the kind of work I’m doing (parsers and programming-language research), but I just never seem to encounter the need for such heavy machinery. I’d be delighted to hear from people who have situations that are really painful or verbose without dependency injection.

Because when you can inject components into your system under test, you can mock them (especially if they take interfaces or objects of abstract type, or even objects with heaps of virtual methods; then you can leverage one of the myriad mocking frameworks out there instead of rolling your own for a test), so you don’t have to spend a whole bunch of time, effort and resources to spin up a test. Your tests are faster and test what the system under test is doing, not what other parts are doing.

A great example is injecting repositories into, say, MVC controllers. When you inject the repository, you don’t have to care whether the actual repo can load or save items; you can just make a fake one that does. So now you’re testing what the controller does with the repo, not how the underlying repo behaves. You’re not having to spin up a full environment that an actual repo can use to serve up or save data; you just test how the controller interacts with a repo, as if it were given one under “perfect” conditions (for the specific test).
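A sketch of that idea, with a hypothetical repository interface and controller (all names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical contract: the controller only knows this interface.
interface ItemRepository {
    List<String> loadAll();
}

// The controller receives its repository; it never constructs one.
class ItemController {
    private final ItemRepository repo;
    ItemController(ItemRepository repo) { this.repo = repo; }

    String index() {
        return "Showing " + repo.loadAll().size() + " items";
    }
}

public class ControllerTestDemo {
    public static void main(String[] args) {
        // A fake repo under "perfect" conditions: no database, no environment.
        ItemRepository fake = () -> {
            List<String> items = new ArrayList<>();
            items.add("a");
            items.add("b");
            return items;
        };
        ItemController controller = new ItemController(fake);
        System.out.println(controller.index()); // exercises the controller, not the repo
    }
}
```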

I must admit that I don’t know the Java side of this well, but I’m quite sure it’s closely analogous to the C# side, since I know a lot of the cool concepts in .NET were “borrowed” from Java. The next step is having these objects injected into the constructor of your orchestrator classes via some smart dependency-resolving framework (in .NET, you could use Windsor or Autofac, for example).

The other advantage you get out of that is changing dependency resolution on the fly — something I didn’t have to actually do until recently:

I’m working on a system for a client which uses their own, existing authentication API. It’s a simple OAuth-style thing. For testing, though, I don’t want to have to hit their OAuth service (because it’s slow) or rely on them to set up test users for me (because they are slow and I hate bothering people about as much as I hate waiting for them).
So I have a build target which swaps out the proper authentication mechanism for a totally faked one by dynamically loading a faked auth assembly.
Now I can test in my own environment and just “pretend” to be any user, with any rights I want, after going through a login screen that lets me select the options I want. In addition, because the injection is done via dynamic load, I can ensure that the faked auth assemblies never reach the client (and so can never be loaded; the system can only load the REAL auth assemblies), so I *can’t* deploy a version of the system with faked auth in place. Because I’m a little paranoid like that.
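The setup described is .NET, but the same trick can be sketched in Java: resolve the implementation class by a configurable name, so a deployment that simply omits the fake class cannot instantiate it (all names below are hypothetical):

```java
// Rough Java analogue of swapping an implementation by dynamic loading.
interface Authenticator {
    boolean login(String user, String password);
}

// Pretend to be any user; a class like this would only ship in test builds.
class FakeAuthenticator implements Authenticator {
    public boolean login(String user, String password) { return true; }
}

public class AuthLoaderDemo {
    public static void main(String[] args) throws Exception {
        // In a real setup this name would come from a build-specific config,
        // pointing at the real implementation in production builds.
        String implName = System.getProperty("auth.impl", "FakeAuthenticator");
        Authenticator auth = (Authenticator) Class.forName(implName)
                .getDeclaredConstructor().newInstance();
        System.out.println(auth.login("alice", "whatever"));
    }
}
```

If the fake class is not on the classpath, Class.forName throws, so the “paranoid” guarantee falls out of the packaging.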

Davyd is right: once you understand DI, it becomes a very powerful tool – not just for unit testing, but also for creating larger systems, because it encourages the componentization of systems, using proper contracts.

I disagree with Nix, though: while traits may have some overlap with DI, they are a compile-time feature. The power of DI comes when dependencies are formulated in terms of behavior, i.e. pure abstract base classes (or interfaces in Java), allowing the injection of any concrete instance fulfilling the contract at link time or run time, respectively, without having to recompile the component itself.

I used Java over the summer after being a functional programmer (mostly OCaml) for a few years before that. From my perspective, a lot of the Java idioms (including dependency injection) seem to emulate the disciplines and benefits of functional programming. DI, in particular, seems to be a way to emulate referential transparency: instead of your function having hidden bits and pieces it creates on invocation (perhaps non-deterministically), you explicitly feed it everything it needs, under the assumption that given the same inputs, it has the same behavior.
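One common way to picture this in Java, using java.time.Clock as the injected dependency (a sketch of my own, not from the post):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

class Greeter {
    private final Clock clock;
    Greeter(Clock clock) { this.clock = clock; }

    // A version that called Instant.now() internally would be
    // non-deterministic; with the clock injected, same inputs => same output.
    String stamp(String name) {
        return name + " @ " + Instant.now(clock);
    }
}

public class ClockDemo {
    public static void main(String[] args) {
        Clock fixed = Clock.fixed(Instant.EPOCH, ZoneOffset.UTC);
        Greeter g = new Greeter(fixed);
        System.out.println(g.stamp("mike")); // always "mike @ 1970-01-01T00:00:00Z"
    }
}
```

Hand the method a fixed clock and it behaves like a pure function; hand it Clock.systemUTC() and you get the real-world behavior back.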

Just to be clear: I’m not saying that the functional paradigm or terminology is better, just that it’s the lens I see through. And yes, Java in 2015 is so much nicer than Java in the mid-2000s.