Since learning (and loving) automated testing I have found myself using the dependency injection pattern in almost every project. Is it always appropriate to use this pattern when working with automated testing? Are there any situations where you should avoid using dependency injection?

5 Answers

Basically, dependency injection makes some (usually but not always valid) assumptions about the nature of your objects. If those are wrong, DI may not be the best solution:

First, most basically, DI assumes that tight coupling of object implementations is ALWAYS bad. This is the essence of the Dependency Inversion Principle: "a dependency should never be made upon a concretion; only upon an abstraction".

This closes the dependent object to change based on a change to the concrete implementation; a class depending upon ConsoleWriter specifically will need to change if output needs to go to a file instead, but if the class were dependent only on an IWriter exposing a Write() method, we can replace the ConsoleWriter currently being used with a FileWriter and our dependent class wouldn't know the difference (Liskov Substitution Principle).
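A minimal sketch of that substitution, written in Java (names adapted to Java conventions; InMemoryWriter stands in for the FileWriter of the example so the swap is observable without touching the filesystem):

```java
import java.util.ArrayList;
import java.util.List;

// The abstraction the dependent class will see.
interface Writer {
    void write(String message);
}

class ConsoleWriter implements Writer {
    public void write(String message) { System.out.println(message); }
}

// Stands in for FileWriter; collects output in memory for demonstration.
class InMemoryWriter implements Writer {
    final List<String> lines = new ArrayList<>();
    public void write(String message) { lines.add(message); }
}

// The dependent class knows only the abstraction, so either implementation
// can be substituted without changing it.
class Greeter {
    private final Writer writer;
    Greeter(Writer writer) { this.writer = writer; }
    void greet(String name) { writer.write("Hello, " + name); }
}
```

Swapping a ConsoleWriter for an InMemoryWriter changes nothing inside Greeter; that is the closure to change the answer describes.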

However, a design can NEVER be closed to all types of change; if the design of the IWriter interface itself changes, to add a parameter to Write(), an extra code object (the IWriter interface) must now be changed, on top of the implementation object/method and its usage(s). If changes in the actual interface are more likely than changes to the implementation of said interface, loose coupling (and DI-ing loosely-coupled dependencies) can cause more problems than it solves.

Second, and corollary, DI assumes that the dependent class is NEVER a good place to create a dependency. This goes to the Single Responsibility Principle; if you have code which creates a dependency and also uses it, then there are two reasons the dependent class may have to change (a change to the usage OR the implementation), violating SRP.

However, again, adding layers of indirection for DI can be a solution to a problem that doesn't exist; if it is logical to encapsulate logic in a dependency, but that logic is the only such implementation of a dependency, then it is more painful to code the loosely-coupled resolution of the dependency (injection, service location, factory) than it would be to just use new and forget about it.
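For contrast, a hypothetical sketch of the "just use new and forget about it" case, where the dependency has exactly one implementation and no other is anticipated (TaxCalculator and Invoice are invented names for illustration):

```java
// Hypothetical single-implementation dependency: no interface, no injection.
class TaxCalculator {
    double taxOn(double amount) { return amount * 0.2; } // illustrative flat rate
}

class Invoice {
    // Tight coupling, chosen deliberately: only one implementation exists,
    // so the loosely-coupled resolution machinery would solve nothing.
    private final TaxCalculator calc = new TaxCalculator();

    double total(double net) { return net + calc.taxOn(net); }
}
```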

Lastly, DI by its nature centralizes knowledge of all dependencies AND their implementations. This increases the number of references that the assembly which performs the injection must have, and in most cases does NOT reduce the number of references required by actual dependent classes' assemblies.

SOMETHING, SOMEWHERE, must have knowledge of the dependent, the dependency interface, and the dependency implementation in order to "connect the dots" and satisfy that dependency. DI tends to place all that knowledge at a very high level, either in an IoC container, or in the code that creates "main" objects such as the main form or Controller which must hydrate (or provide factory methods for) the dependencies. This can put a lot of necessarily tightly-coupled code and a lot of assembly references at high levels of your app, which only needs this knowledge in order to "hide" it from the actual dependent classes (which from a very basic perspective is the best place to have this knowledge; where it's used).

It also normally doesn't remove said references from lower down in code; a dependent must still reference the library containing the interface for its dependency, which is in one of three places:

all in a single "Interfaces" assembly that becomes very application-centric,

each one alongside the primary implementation(s), removing the advantage of not having to recompile dependents when dependencies change, or

one or two apiece in highly-cohesive assemblies, which bloats the assembly count, dramatically increases "full build" times and decreases application performance.

All of this, again to solve a problem in places where there may be none.
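The "something, somewhere" point can be made concrete with a hand-rolled composition root, sketched in Java with invented types; all knowledge of concrete implementations sits at the very top of the application:

```java
// The abstraction the dependent class references.
interface Notifier {
    void send(String message);
}

// A concrete implementation, named only at the composition root below.
class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

class OrderService {
    private final Notifier notifier; // knows the interface only
    OrderService(Notifier notifier) { this.notifier = notifier; }
    void placeOrder(String id) { notifier.send("order " + id + " placed"); }
}

class CompositionRoot {
    public static void main(String[] args) {
        // All the "connect the dots" knowledge lives here, at the highest level:
        // this is the one place that names both OrderService and EmailNotifier.
        OrderService service = new OrderService(new EmailNotifier());
        service.placeOrder("A-1");
    }
}
```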

Dependency injection specifically helps where you need to get class A from class B from class D, etc. In that case (and only that case) the DI framework may generate less assembly bloat than poor man's injection. Also, I never had a bottleneck caused by DI. Think of maintenance cost, never of CPU cost, because code can always be optimized; but doing that without a reason has a cost.
–
DarioOO Dec 1 '14 at 23:05

Outside of dependency-injection frameworks, dependency injection (via constructor injection or setter injection) is very nearly a zero-sum game: you decrease the coupling between object A and its dependency B, but now any object that needs an instance of A must also construct object B.

You've slightly reduced the coupling between A and B, but reduced A's encapsulation, and increased coupling between A and any class that must construct an instance of A, by coupling them to A's dependencies as well.

So dependency injection (without a framework) is about equally harmful as it is helpful.

The extra cost is often easily justifiable, however: if the client code knows more about how to construct the dependency than the object itself does, then dependency injection really does reduce coupling; for example, a Scanner doesn't know much about how to obtain or construct an input stream to parse input from, or what source the client code wants to parse input from, so constructor injection of an input stream is the obvious solution.
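java.util.Scanner makes the same point in Java: the caller, who knows where input should come from, injects the source through the constructor, and the Scanner only parses it.

```java
import java.io.ByteArrayInputStream;
import java.util.Scanner;

class ScannerDemo {
    public static void main(String[] args) {
        // The client, not the Scanner, decides the input source: here an
        // in-memory stream, but it could equally be System.in or a file.
        Scanner fromString = new Scanner(new ByteArrayInputStream("42".getBytes()));
        System.out.println(fromString.nextInt()); // prints 42
    }
}
```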

Testing is another justification, in order to be able to use mock dependencies. That should mean adding an extra constructor used for testing only that allows dependencies to be injected: if you instead change your constructors to always require dependencies to be injected, suddenly, you have to know about your dependencies' dependencies' dependencies in order to construct your direct dependencies, and you can't get any work done.
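The two-constructor arrangement described above might look like this in Java (Stamper is a hypothetical class; java.time.Clock plays the role of the mockable dependency):

```java
import java.time.Clock;
import java.time.Instant;

class Stamper {
    private final Clock clock;

    // Production path: the class builds its own dependency, so ordinary
    // callers carry no construction burden.
    Stamper() { this(Clock.systemUTC()); }

    // Testing-only path: a fake or fixed clock can be injected.
    Stamper(Clock clock) { this.clock = clock; }

    Instant stamp() { return clock.instant(); }
}
```

Production code calls `new Stamper()` and never learns about Clock; a test passes `Clock.fixed(...)` to get deterministic output.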

It can be helpful, but you should definitely ask yourself, for each dependency: is the testing benefit worth the cost, and am I really going to want to mock this dependency while testing?

When a dependency-injection framework is added, and the construction of dependencies is delegated not to client code but instead to the framework, the cost/benefit analysis changes greatly.

In a dependency-injection framework, the burden of choosing the right dependencies typically returns to the programmer of the dependent class, by giving appropriate annotations to indicate where the dependency should be sourced from. The ability to circumvent those instructions when it's beneficial for testing is just gravy.

I don't see any reason to want to avoid dependency injection if using such a framework.

Just a quick comment about the dependency mocking in tests and "you have to know about your dependencies' dependencies' dependencies"... That's not true at all. One does not need to know or care about some concrete implementation's dependencies if a mock is being injected. One only needs to know about the direct dependencies of the class under test, which are satisfied by the mock(s).
–
Eric King Feb 21 '12 at 21:36


You misunderstand, I'm not talking about when you inject a mock, I'm talking about the real code. Consider class A with dependency B, which in turn has dependency C, which in turn has dependency D. Without DI, A constructs B, B constructs C, C constructs D. With construction injection, to construct A, you must first construct B, to construct B you must first construct C, and to construct C you must first construct D. So class A now has to know about D, the dependency of a dependency of a dependency, in order to construct B. This leads to excessive coupling.
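The chain in this comment, sketched with empty classes (names follow the comment):

```java
// Without injection, each class would construct its own next link.
// With constructor injection, the client of A must assemble the whole chain.
class D {}
class C { C(D d) {} }
class B { B(C c) {} }
class A { A(B b) {} }

class Client {
    static A build() {
        // The client now names D, a dependency of a dependency of a dependency.
        return new A(new B(new C(new D())));
    }
}
```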
–
Theodore Murdock Feb 22 '12 at 16:33

There wouldn't be so much cost to having an extra constructor used for testing only that allows the dependencies to be injected. I'll try to revise what I said.
–
Theodore Murdock Feb 22 '12 at 16:37

Ok, I misunderstood your point then, as I agree with you. I thought you were talking about the testing process itself.
–
Eric King Feb 22 '12 at 16:37

A few situations where it's better not to inject:

If you are creating database entities, inject a factory class into your controller instead, and let the factory create the entities.

If you need to create primitive objects like ints or longs. You should also create most standard-library objects, such as dates and GUIDs, "by hand".

If you would like to inject configuration strings, it's probably a better idea to inject a configuration object (in general, it is recommended to wrap simple types in meaningful objects: int temperatureInCelsiusDegrees -> CelsiusDegree temperature).
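That last suggestion, sketched in Java (CelsiusDegree and Thermostat are hypothetical names): wrap the bare value in a small meaningful type and inject that instead.

```java
// A simple value wrapped in a meaningful type, as the answer recommends.
final class CelsiusDegree {
    final int value;
    CelsiusDegree(int value) { this.value = value; }
}

class Thermostat {
    private final CelsiusDegree target; // injected as a typed value, not a raw int
    Thermostat(CelsiusDegree target) { this.target = target; }
    boolean shouldHeat(int currentCelsius) { return currentCelsius < target.value; }
}
```

The wrapper documents the unit at every injection site and prevents a plain int (seconds? fahrenheit?) from being passed by mistake.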

When you don't stand to gain anything by making your project maintainable and testable.

Seriously, I love IoC and DI in general, and I'd say that 98% of the time I will use that pattern without fail. It's especially important in a multi-user environment, where your code can be reused again and again by different team members and different projects, as it separates logic from implementation. Logging is a prime example of this: an ILog interface injected into a class is a thousand times more maintainable than simply plugging in your logging framework-du-jour, as you have no guarantee another project will use the same logging framework (if it uses one at all!).
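The ILog idea, sketched in Java (Log and OrderProcessor are invented names): the class logs through its own minimal interface, and each host project adapts that interface to whatever framework it actually uses.

```java
// The class's own minimal logging abstraction, in place of a framework type.
interface Log {
    void info(String message);
}

class OrderProcessor {
    private final Log log;
    OrderProcessor(Log log) { this.log = log; }
    void process(String orderId) { log.info("processing " + orderId); }
}
```

A project using some logging framework supplies a one-line adapter (`msg -> logger.info(msg)`); a project with no framework can pass `System.out::println`.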

However, there are times when it is not an applicable pattern. For example, functional entry points that are implemented in a static context by a non-overridable initialiser (WebMethods, I'm looking at you, but your Main() method in your Program class is another example) simply cannot have dependencies injected at initialisation time. I'd also go as far as to say that a prototype, or any throw-away investigative piece of code, is a bad candidate; the benefits of DI are mostly mid-to-long-term benefits (testability and maintainability). If you are certain that you will throw away the majority of a piece of code within a week or so, I would say you gain nothing by isolating dependencies; just spend the time you'd normally spend testing and isolating dependencies on getting the code working.

All in all, it's sensible to take a pragmatic approach to any methodology or pattern, as nothing is applicable 100% of the time.

One thing to note is your comment about automated testing: my definition of this is automated functional tests, for example scripted selenium tests if you are in a web context. These are generally completely black-box tests, with no need to know about the inner workings of the code. If you were referring to Unit- or Integration-tests I'd say that the DI pattern is almost always applicable to any project that heavily relies on that kind of white-box testing, since, for example, it allows you to test things like methods that touch the DB without any need for a DB to be present.

An alternative to Dependency Injection is using a Service Locator. A Service Locator is easier to understand and debug, and makes constructing an object simpler, especially if you aren't using a DI framework. Service Locators are a good pattern for managing external static dependencies, for instance a database that you would otherwise have to pass into every object in your data access layer.

When refactoring legacy code, it is often easier to refactor to a Service Locator than to Dependency Injection. All you do is replace instantiations with service lookups and then fake out the service in your unit test.
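A minimal service locator of the kind described, sketched in Java (ServiceLocator, Database, and UserRepository are invented names): the class looks its dependency up by type instead of receiving it, and a unit test registers a fake before constructing the class.

```java
import java.util.HashMap;
import java.util.Map;

// A bare-bones registry of services keyed by type.
class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<>();

    static <T> void register(Class<T> type, T instance) { services.put(type, instance); }

    static <T> T resolve(Class<T> type) { return type.cast(services.get(type)); }
}

interface Database {
    String query(String sql);
}

class UserRepository {
    // The locator call replaces a direct instantiation; nothing in this
    // class's constructor reveals that it depends on a Database.
    private final Database db = ServiceLocator.resolve(Database.class);

    String findName(int id) { return db.query("select name from users where id=" + id); }
}
```

Refactoring legacy code this way means swapping each `new ConcreteDatabase()` for `ServiceLocator.resolve(Database.class)`, then registering a fake in test setup.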

However, there are some downsides to the Service Locator. Knowing the dependencies of a class is more difficult because the dependencies are hidden in the class's implementation, not in constructors or setters. And creating two objects that rely on different implementations of the same service is difficult or impossible.

TLDR: If your class has static dependencies or you are refactoring legacy code, a Service Locator is arguably better than DI.

When it comes to static dependencies, I'd rather see facades slapped on them that implement interfaces. These can then be dependency injected, and they fall on the static grenade for you.
–
Erik Dietrich Feb 21 '12 at 0:26

@Kyralessa I agree, a Service Locator has many downsides and DI is almost always preferable. However, I believe there are a couple exceptions to the rule, as with all programming principles.
–
Garrett Hall Feb 21 '12 at 14:13

Service location's main accepted use is within a Strategy pattern, where a "strategy picker" class is given enough logic to find the strategy to use, and either hand it back or pass a call through to it. Even in this case, the strategy picker can be a facade for a factory method provided by an IoC container, which is given the same logic; the reason you'd break it out is to put logic where it belongs and not where it's most hidden.
–
KeithS Feb 21 '12 at 19:18