// If multiple implementations are possible, a factory could be used here.
// ("interface" is a reserved word in C#, so the variable needs another name.)
IMyInterface myInterface = new InterfaceImplementation();
var myClass = new MyClass(myInterface);

It may be that I am missing a very important point, but I am failing to see what is gained. I am aware that using an IoC container I can easily handle an object's life cycle, which is a +1, but I don't think that is core to what IoC is about.

Both examples use dependency injection: one using a container, one in an ad-hoc DIY style. Try an example with a tree of 100 classes with several shared dependencies and you will find out why dependency injection containers (such as the Unity container) are useful.
– Patrick Jun 29 '13 at 8:59

@Patrick: "a tree of 100 classes with several shared dependencies"? Not sure, but that sounds to me like a re-think may be in order. There is no need to put every class under the management of any form of DI. In any application there are many classes that exist only to factor out specific parts of a bigger class. Such helper classes do not feature in the helped class's public methods and properties (if they do, they should be behind an interface anyway) and thus do not need to be available to other classes, directly or indirectly, through an IoC container.
– Marjan Venema Jun 29 '13 at 9:56

@Marjan I liked the second article; it clears up a few aspects I may have been confused about. @Patrick thanks for clearing up that both are in fact DI. It raises the question: should DI just be used at entry points, for example to interface with a DAL?
– James Jun 29 '13 at 11:03

5 Answers

In your example, you haven't gained anything; your intuition is correct. If you have only one class implementing an interface, and it may well always stay that way, there is no gain. There is no limit to the degree to which we can add interfaces and clutter the code.

Can you expand the example to show when the framework has benefits? Also, when is DI best used? One thing many people say is that developers overuse it.
– James Jun 29 '13 at 20:01


I have seen the negative side in many projects. My complaints specifically: no practical benefit (gold plating); added complexity; global mutable data (multi-threading problems); difficulty debugging and finding dependencies. Dependency injection is used in extensible editors, but we often use different terminology in such a project (plug-ins). In many other types of software, I prefer the benefit of seeing dependencies in the stack, by passing dependent objects as parameters. Coupling is not necessarily bad, and adding an interface is not true decoupling.
– Frank Hileman Jul 1 '13 at 16:27

Continuing past my character limit: anything that is highly extensible, especially with third-party components, such as an editor like Visual Studio, or a complex service framework (web server), will typically use something like dependency injection, though it may have a different name. Things that are small and tightly focused in purpose probably have less use for DI.
– Frank Hileman Jul 1 '13 at 16:34

In this example, I would not call using a DI container "better practice". However, code bases tend to grow and become more complicated. Imagine we have a dependency graph like this:

  A
 / \
B   C
 \ /
  D

The code to construct this graph without a DI container would look something like this:

ID d = new D();
IB b = new B(d);
IC c = new C(d);
A a = new A(b, c);

Note that the same instance of D is used for both B and C. But what if we decide that they each need their own instance? Now we have to duplicate the code that creates D, and the result looks like this:
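The duplicated-construction version, sketched with the same classes as above, would look something like this:

```csharp
// Each consumer now constructs its own D, duplicating the creation code.
// Every future change to how D is built must be made in both places.
IB b = new B(new D());
IC c = new C(new D());
A a = new A(b, c);
```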

Nontrivial applications can easily have far more complicated dependency graphs than this, and the code's complexity will scale accordingly. Contrast the above with the equivalent code using a DI container:

var iocContainer = new UnityContainer();
A a = iocContainer.Resolve<A>();

Note that the complexity does not scale with the number of dependencies. Even when you consider the container's configuration, that only scales with the number of unique dependencies (e.g. you'll only need one entry for D even if it ends up creating a hundred instances). In addition, some dependencies (like concrete classes) can be automatically resolved without any configuration at all.

DI containers also provide the flexibility to specify, in configuration, whether or not you want D to be a singleton (for example), in addition to plenty of other resolution, construction, and lifecycle options. These may also be more readable using your container's configuration syntax than implementing them yourself.
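As a sketch of what that configuration might look like with Unity's registration API (reusing the interfaces and classes from the example above; the lifetime managers shown are Unity's built-in ones):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();
container.RegisterType<IB, B>();
container.RegisterType<IC, C>();

// One entry controls every D the container hands out.
// ContainerControlledLifetimeManager makes D a singleton; swap in
// TransientLifetimeManager to give B and C their own instances.
container.RegisterType<ID, D>(new ContainerControlledLifetimeManager());

A a = container.Resolve<A>();
```

Switching D between shared and per-consumer instances is a one-line change here, rather than an edit to every construction site.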

The more complex your dependency graph becomes, the more benefit you'll get from a DI container.

@Cyanfish I disagree with this: "The more complex your dependency graph becomes, the more benefit you'll get from a DI container." DI does not remove dependencies, if you consider how dependencies are computed or defined in a dynamically typed language: a dependency is a call from one object to another. DI adds an interface, obscuring the dependency, and there is no benefit in that by itself.
– Frank Hileman Jul 5 '13 at 18:13


@Frank Hileman I didn't mean that there would be fewer dependencies, just that you need to write less code to instantiate those dependencies. Also I was comparing a DI container to "poor man's" DI, not a lack of DI; and DI containers don't require you to use interfaces (although IMO you usually should).
– Cyanfish Jul 5 '13 at 21:47

OK, I see your point. Personally, I prefer to see up front (via call chain) where dependencies are, rather than digging through a configuration file, and learning an entire DI library.
– Frank Hileman Jul 5 '13 at 21:52

Personally, I'm a huge fan of code-as-configuration and fluent syntax (e.g. Ninject), and find it's not very hard to learn. I can understand how it might be problematic if you work on a lot of different projects with different containers, though.
– Cyanfish Jul 5 '13 at 21:55

Passing in an implementation of the interface should be preferred. The real benefits come with more complex examples. What happens if the creation is determined by runtime behavior? Passing in a factory rather than the instance solves that, but can get unwieldy. And what if the creation timeframe is not clear-cut (the consuming class cannot simply call the factory, but has to wait until everything is set up)?
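A sketch of the factory option (the `IFoo` interface and `Consumer` class are hypothetical names used for illustration):

```csharp
using System;

public interface IFoo
{
    void Bar();
}

// Instead of receiving an IFoo directly, the consumer receives a factory
// delegate, so it decides at runtime when (and how often) to create one.
public class Consumer
{
    private readonly Func<IFoo> _fooFactory;

    public Consumer(Func<IFoo> fooFactory)
    {
        _fooFactory = fooFactory;
    }

    public void DoWork()
    {
        IFoo foo = _fooFactory(); // created only when actually needed
        foo.Bar();
    }
}
```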

Further, what happens when the class you're instantiating isn't the concrete version either? If all you need is an IFoo, some factory is providing the implementation. Some concrete implementations need certain dependencies injected, while others need different ones (or none at all).

IoC containers are a giant hammer. Usually just passing in the concrete implementation of an interface is clear, functional, and flexible. But sometimes the design grows complex enough that you need IoC containers to manage the tangle of dependencies spread throughout your object graph.

The implementation or design pattern of DI is never as interesting as the motivation behind its use or abuse. You got to the meat of the matter with "What happens if..?" The problem is determining beforehand when this scenario will occur, without over-generalizing the problem.
– Frank Hileman Jul 1 '13 at 16:51

Using a DI container as a service locator is in fact an anti-pattern, as has been widely discussed. Sometimes you have to do it, but generally speaking you should avoid it. Basically, by calling container.Resolve throughout your code you are not doing any IoC, and IoC is what DI containers are intended for.

What we really use DI containers for is to inject dependencies where needed, using constructor parameters or property injection. You register the classes that implement your interfaces well in advance, when your application initializes/starts, and there you specify all the details: simple initialization, parameters, factories, life scope, and so on. Once all this is done, you don't really need to care, as long as your constructors have parameters into which your registered classes can be injected.
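A minimal sketch of what that constructor injection looks like (`CustomerService` is a hypothetical consumer of the `ICustomerRepository` this answer mentions):

```csharp
public interface ICustomerRepository { }

// The class declares what it needs; the container supplies the
// registered implementation when it resolves CustomerService.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }
}
```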

You of course have to have a bootstrapper that resolves your main application class, but since that class will already have its parameters injected, all the rest can happen automatically. And if you need an ISession or an ICustomerRepository or an IWhatEverElseYouNeed, you will get it without having to instantiate it yourself every time you need it.

The question is where the knowledge about which concrete implementation of a dependency is needed, and about the setup (e.g. configuration or other dependencies) that dependency requires, is best kept. This may or may not be in the dependent object. As always, the answer depends. But consider a complex object which needs other dependencies and maybe configuration data like URLs or passwords. This dependency may also be a dependency of other objects. In this case, the knowledge of the setup should clearly not be located in the dependent objects, because it would be redundant and/or clutter class relationships, and thus create more overall complexity than with a DI container, which keeps this knowledge in one place.

You could of course implement a factory for this case. But what if your application has lots of complex dependencies, which all need some kind of factory? Now you have the problem of factories floating around, all containing part of the knowledge of how the application's dependencies are interconnected. Using a container in this case adds much to the clarity and transparency of the system. DI containers at this level are often used as a factory of factories.

The more complex the application gets, the more it is useful to have a central place interconnecting the parts.