Some programmers recommend logical separation of layers over physical separation. For example, given a DL, this means we create a DL namespace rather than a DL assembly.

Benefits include:

Faster compilation time

Simpler deployment

Faster startup time for your program

Fewer assemblies to reference

I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Within technical limits, we should strive for fewer than 5 assemblies. New assemblies are created out of technical need, not layer requirements.

Developers are worried for a few reasons.

A. People like to work in their own environment so they don't step on each other's toes.

B. Microsoft tends to create new assemblies. E.g., ASP.NET has its own DLL, as does WinForms, etc.

C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact its dependents.

My personal view:

I view A as silos, a.k.a. cowboy programming, and suggest we implement branching to create isolation. As for C: first, that is a human problem, and we shouldn't create technical workarounds for human behavior. Second, my goal is not to put everything in common. Rather, I want partitions to be made with namespaces, not assemblies. Having a shared assembly doesn't make everything common.

I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint, illogical or otherwise a bad idea?

It can quickly get hectic having too many namespaces bouncing about. I speak from experience. There tends to be a mentality that prefers creating a new namespace over renaming an old one or finding a place to put new code, so if you're going to do it this way, you should agree on some naming convention. Adopt a namespace convention like "com.company.project.section.(df|util|model)" and things will go smoothly. I'm afraid I can't help you convince them, though. For that you're on your own.
–
Neil Apr 20 '12 at 13:30

4 Answers

In the .NET ecosystem the existence of many assemblies is due to the belief that a cs/vb/fsproj file == an assembly. Throw on top of that the practice MS has been pushing for years of having one cs/vb/fsproj for every 'layer' (I put that in quotes because they have pushed both layers and tiers at different times) and you end up with solutions that have dozens and even hundreds of project files, and ultimately assemblies, in them.

I'm a firm believer that Visual Studio, or any IDE, is not the place where you should be architecting the assembly output of your project. Call it Separation of Concerns if you will; I believe that writing code and assembling code are two different concerns. That leads me to the point where the projects/solutions that I see in my IDE should be organized and structured so as to best enable the development team to write code. My build script is where the architecture/development team(s) focus on how we are going to assemble the raw code into compiled artifacts and, ultimately, how we're going to deploy those to different physical locations. It's prudent to note that I absolutely do not use MSBuild for my build scripts: its tight coupling to the *proj files and their structures doesn't allow any flexibility in moving away from the problem of large project and assembly counts in our codebases. Instead I use other tools and feed the files needed directly to the compiler (csc.exe, vbc.exe, etc.).
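To make the idea concrete, here is a minimal sketch of what such a script might look like. The folder names (Services, Repositories, Views), the root path, and the assembly names (Backend.dll, UI.exe) are all hypothetical; the csc commands are printed rather than executed so the sketch stays self-contained:

```shell
# Hypothetical layout: one main project with per-layer subfolders under
# src/MyCo.MyApp. The build script, not the *proj files, decides which
# folders end up in which assembly.
SRC="src/MyCo.MyApp"

# Dry run: emit the compiler invocations instead of running csc.exe,
# since this is only an illustration of the grouping.
{
  echo "csc /target:library /out:Backend.dll $SRC/Services/*.cs $SRC/Repositories/*.cs"
  echo "csc /target:exe /out:UI.exe /reference:Backend.dll $SRC/Views/*.cs"
} > build_commands.txt

cat build_commands.txt
```

Regrouping the deployables (say, folding Backend.dll into UI.exe) then means editing two lines here, not moving files between *proj files.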

By taking this stand I'm able to have my development team focus on writing functionality without any thought to the assemblies that will be output. Someone, surely, will say, "But that means developers can put code into assemblies it shouldn't be in." Just because the code compiles in the IDE doesn't mean it will compile when the build script runs; to get it to compile, you'd probably have to alter the build script to pull that code into the unwanted location, and ultimately the build script is the one source of truth for how assemblies are constructed. A technique I use to back this up is to create unit tests that both describe and verify the architecture and deployables of the project. If the classes in the UI layers should never reference those in the data access layer, then I write a test that enforces that. Those tests will need to be changed if we decide to change the deployables or architecture, but since they also double as documentation on those topics, we should be changing them anyway to keep the documentation up to date.
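As a hedged illustration of the flavor of such an architecture check, here it is expressed as a shell step over the source text rather than a real unit test; the folder name (src/UI) and the forbidden namespace (MyCo.MyApp.DataAccess) are invented for the example:

```shell
# Fail the build if any UI-layer source file mentions the data access
# namespace. "src/UI" and "MyCo.MyApp.DataAccess" are hypothetical names.
violations=$(grep -rl "MyCo.MyApp.DataAccess" src/UI 2>/dev/null || true)

if [ -n "$violations" ]; then
    echo "Architecture violation: UI references DataAccess in:" >&2
    echo "$violations" >&2
    exit 1
fi
echo "Architecture check passed"
```

A real .NET version would typically use reflection or a dependency-analysis tool inside a unit test, but the build-failing behavior is the same.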

On the flip side of that argument is the fact that changing the assemblies and deployables becomes much easier and faster. Instead of having to move code and files from one *proj file to another, and incurring all the problems that many version control systems have with that task, all you have to do is rework the build script to source each assembly's content from the already existing physical locations. It's this capability that highlights the decoupling between how we structure our solution and *proj files and how we create our output assemblies. With this technique, we are able to adjust both the number and the contents of the assemblies that we create without ever having to adjust how the code is structured in the IDE. Not only do we have the flexibility to change our outputs regularly and easily, but when we do that the developers are not impacted by having to discover where files have moved to. The re-learning overhead of the changes is non-existent.

The one drawback that many people struggle with when looking at this type of solution is that you may no longer be able to just "Hit F5" to run the application. While I'd argue that you don't want to do that anyway, it has to be addressed. Solutions are available, and they differ depending on the type of application you're building. The one root similarity between them is that if you absolutely must step through the code to debug/test it, learn how to Attach To Process.

To summarize, use the IDE for developing code. Use a build scripting solution that is not reliant on the defined solution and *proj structures to design and create the deployables that the application needs. Keep those two tasks separate and you'll see a lot less friction due to deployment level changes.

Interesting. But is that really necessary? Seems to me that this will add a lot of friction to build script management. Typically, one needs a coarse-grained assembly structure and a finer-grained logical structure. So why not just rely on *.proj files for assemblies, and use subfolders to make the logical structure visible in your IDE?
–
Doc Brown Apr 20 '12 at 16:14

What I usually end up with is about 2-5 *proj files on a project (depending on a number of bigger architectural things like needing Windows Services, console batch programs, etc.). Usually there is one main proj, and within it I create subfolders at a logical level. So there would be a subfolder each for View, Dto, ViewModel (in a WPF app), Services, Repositories, etc. I use my build script to create the assemblies I need. If I want to start with UI.exe only, then I do. Later I can pull things out to create UI.exe and Backend.dll. That change doesn't affect the *proj file contents or structure.
–
Donald Belcham Apr 20 '12 at 16:32

About the friction of build script management: my projects rarely have any. I find that once the base build script has been completed (usually early in the project), I only make minor modifications to it, and then only every few months. One of the keys to doing this is structuring your subfolders (logical layers) well and keeping the code within them highly decoupled. If you do that, you can simply change the script to (pseudocode here): include MyCo.MyApp.Service*.cs; include MyCo.MyApp.DAL*.cs. That makes splitting a logical layer out of an assembly, or adding one in, quite easy.
–
Donald Belcham Apr 20 '12 at 16:41

It's important to mention that Donald Belcham does "Brownfield Development" training on Pluralsight.com. His training is the foundation for my question, so his answer gives me a lot of additional insight. Without watching the video for the full details, it may be difficult to fully appreciate this answer. Just my two cents...
–
P.Brian.Mackey Apr 20 '12 at 21:10

In principle, I support your desire for fewer assemblies. Having multiple assemblies needs technical justification, while having few assemblies has simplicity on its side and needs no other justification.

In general, logical separation is stronger and cleaner than physical separation. The boundary between assemblies is only a very small technical hindrance to bugs and code rot. The most powerful boundary is the social boundary ("that's Tom's library, I won't fuck around in there"), and namespaces provide the same safety.

Be aware, though, that there are technical reasons for separate assemblies, even though all the good ones are on the deployment and consumer side.

Here is a really good guide to the reasons you should split .NET programs into assemblies, and the reasons you should not. Related to your question, the paper says that A should be handled by your SCCS, that the rationale behind B may be visibility concerns you don't have to care about as long as you are not a framework vendor, and that C may be handled by a tool like NDepend.

Of course, this is recommended by the makers of NDepend, which is a commercial tool, and checking for cyclic dependencies between different logical layers within one assembly is only feasible with a tool like NDepend. So what they suggest mostly makes sense if you buy their tool (nevertheless, most of the arguments in the document seem reasonable to me).

@CorbinMarch: I don't like to copy passages of the paper, but I have edited my answer to make the relation to the points of the OP's question clearer.
–
Doc Brown Apr 20 '12 at 15:08

Regarding A), from the link: "increase the contentions on sharing VS .sln, .csproj files. But as usual, the best practice is to keep these files checked-out just for the few minutes required to tweak project properties or add new empty sources files". I honestly don't think this is a good way of working, and it goes against most DVCS workflows nowadays...
–
Daniel B Nov 15 '12 at 7:03

In a project's infancy, it makes sense to separate modules into distinct assemblies: there will be a lot of flux, and because of Visual Studio's project format, adding new files changes the project file in a way that sometimes makes merging difficult. Compare this to rake or nant, where you use wildcards to determine which files to include in the compilation.

For this reason, I prefer to start with separate physical assemblies and merge them together as the modules become more stable. Thus you get the best of both worlds: fewer merge collisions at the start, and better compilation and initial startup times as you march toward release.