james mckay dot net — https://jamesmckay.net
because there are few things that are less logical than business logic

Which .NET IOC containers pass Microsoft’s tests?
Mon, 29 Oct 2018

Since my last post on the state of IOC containers in .NET Core, I’ve ended up going down a bit of a rabbit hole with this particular topic. It occurred to me that since Microsoft has come up with a standard set of abstractions, it is probably best, when choosing a container, to pick one that conforms to these abstractions. After all, That Is How Microsoft Wants You To Do It.

But if you want to do that, what are your options? Which containers conform to Microsoft’s specifications? I decided to spend an evening researching this to see if I could find out.

Rather helpfully, there’s a fairly comprehensive list of IOC containers and similar beasties maintained by Daniel Palme, a .NET consultant from Germany, who regularly tests the various options for performance. He currently has thirty-five of them on his list. With this in mind, it was just an evening’s work to go down the list and see where they all stand.

I looked for two things from each container. First of all, it needs to either implement the Microsoft abstractions directly, or else provide an adapter package on NuGet that does. Secondly, it needs to pass the specification tests in the Microsoft.Extensions.DependencyInjection.Specification package.
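For containers that provide an adapter, running the specification suite is typically a matter of subclassing the test base class and plugging in the adapter. A rough sketch of the pattern, assuming the class and method names from the Microsoft.Extensions.DependencyInjection.Specification.Tests package and Autofac’s adapter package (the test class name here is made up):

```csharp
using System;
using Autofac;
using Autofac.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Specification;

// Inheriting from the specification base class gives you all of the
// Microsoft-defined test cases, run against your container of choice.
public class AutofacSpecificationTests : DependencyInjectionSpecificationTests
{
    // Each test calls this to turn a populated IServiceCollection into
    // the container-under-test's IServiceProvider.
    protected override IServiceProvider CreateServiceProvider(IServiceCollection services)
    {
        var builder = new ContainerBuilder();
        builder.Populate(services); // copy the abstraction-level registrations in
        return new AutofacServiceProvider(builder.Build());
    }
}
```

The same pattern applies to any adapter: override CreateServiceProvider, and the suite does the rest.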

The contenders

At the end of the day, I was able to find adapters on NuGet for twelve of the containers on Daniel’s list. Seven of them passed all seventy-three test cases; the other five failed between one and four of them.

Installing Autofac.Extensions.DependencyInjection installed Autofac 4.2.0, which is not the latest version. To install the latest version, you need to add it explicitly through NuGet. Having said that, Autofac 4.8.1 also passes all the tests.

One of the Unity tests flickered a couple of times, but on most of the test runs that I carried out, they all passed.

Which tests failed?

It’s instructive to see which tests failed. All but one of the failing tests failed for more than one container.

ResolvesMixedOpenClosedGenericsAsEnumerable. This requires that when you register an open generic type (for example, with svc.AddSingleton(typeof(IRepository<>), typeof(Repository<>))) and a closed generic type (for example, IRepository<User>), a request for IEnumerable<IRepository<User>> should return both, and not just one. Grace, Lamar and StructureMap all failed this test.
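A sketch of what this test is checking, written against the default container (the IRepository, Repository, User and UserRepository types are invented for illustration):

```csharp
using System;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton(typeof(IRepository<>), typeof(Repository<>)); // open generic
services.AddSingleton<IRepository<User>, UserRepository>();         // closed generic

using var provider = services.BuildServiceProvider();
var repos = provider.GetServices<IRepository<User>>().ToList();

// A conforming container returns BOTH registrations, not just one.
Console.WriteLine(repos.Count); // 2

public interface IRepository<T> { }
public class Repository<T> : IRepository<T> { }
public class User { }
public class UserRepository : IRepository<User> { }
```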

TypeActivatorWorksWithCtorWithOptionalArgs_WithStructDefaults. Microsoft’s specification requires IOC containers to choose the constructor with the greatest number of parameters that it can successfully resolve. When some of these parameters are optional, the algorithm should still work even if the optional parameters are value types. Grace fails both of these test cases.
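What the constructor-selection rule means in practice can be sketched like this against the default container (IClock, SystemClock and Widget are made-up types). The container should pick the longer constructor and fall back to the default value for the optional value-type parameter:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<IClock, SystemClock>();
services.AddTransient<Widget>();

using var provider = services.BuildServiceProvider();
var widget = provider.GetRequiredService<Widget>();

Console.WriteLine(widget.Chosen);  // the longer constructor should be chosen
Console.WriteLine(widget.Retries); // the optional struct default (3) should be used

public interface IClock { }
public class SystemClock : IClock { }

public class Widget
{
    public string Chosen { get; }
    public int Retries { get; }

    public Widget() => Chosen = "parameterless";

    // This constructor has more parameters, and all of them are either
    // resolvable (IClock) or optional with a value-type default (retries),
    // so a conforming container should prefer it.
    public Widget(IClock clock, int retries = 3)
    {
        Chosen = "greedy";
        Retries = retries;
    }
}
```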

LastServiceReplacesPreviousServices tests that when you register the same service multiple times and request a single instance (as opposed to a collection), the last registration takes precedence over the previous registrations. LightInject fails this test.
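The last-one-wins rule looks like this against the default container (the greeter types are invented):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<IGreeter, EnglishGreeter>();
services.AddSingleton<IGreeter, FrenchGreeter>(); // registered last

using var provider = services.BuildServiceProvider();

// Requesting a single instance must return the last registration.
Console.WriteLine(provider.GetRequiredService<IGreeter>().GetType().Name); // FrenchGreeter

public interface IGreeter { }
public class EnglishGreeter : IGreeter { }
public class FrenchGreeter : IGreeter { }
```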

DisposingScopeDisposesService checks that when a container is disposed, all the services that it is tracking are also disposed. Maestro fails this test — most likely for transient lifecycles, because different containers have different ideas here about what a transient lifecycle is supposed to mean with respect to this criterion.
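And the disposal-tracking behaviour that this test requires, sketched with the default container and an invented TrackedService type:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<TrackedService>();

var provider = services.BuildServiceProvider();
var svc = provider.GetRequiredService<TrackedService>();

// Disposing the container must dispose every IDisposable it created —
// including transients, which is where containers tend to disagree.
provider.Dispose();
Console.WriteLine(svc.Disposed); // True

public class TrackedService : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}
```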

These failing tests aren’t all that surprising. They generally concern more complex and esoteric aspects of IOC container functionality, where different containers have historically had different ideas about what the correct behaviour should be. They are also likely to be especially difficult for existing containers to implement in a backwards-compatible manner.

Nevertheless, these are still tests that are specified by Microsoft’s standards, and furthermore, they may cause memory leaks or incorrect behaviour if ASP.NET MVC or third party libraries incorrectly assume that your container passes them. This being the case, if you choose one of these containers, make sure you are aware of these failing tests, and consider carefully whether they are ones that are likely to cause problems for you.

The most surprising result here was Lamar. Lamar is the successor to StructureMap, which is now riding off into the sunset. It was written by Jeremy Miller, who has said that two of his design goals were to be fully compliant with Microsoft’s specification from the word go, while at the same time having a clean reboot to get rid of a whole lot of legacy baggage that StructureMap had accumulated over the years and that he was sick of supporting. It is also the only container in the list that supports the DI abstractions in the core assembly; the others all rely on additional assemblies with varying amounts of extra complexity. However, the two failing tests in Lamar were exactly the same as the failing tests in StructureMap, so clearly there has been enough code re-use going on to make things difficult. Furthermore, the tests in question represent fairly obscure and low-impact use cases that are unlikely to be a factor in most codebases.

The no-shows

Most of the IOC containers on Daniel’s list for which I couldn’t find adapters are either fairly obscure ones (e.g. Cauldron, FFastInjector, HaveBox, Munq), dead (e.g. MEF), or not actually general purpose IOC containers at all (e.g. Caliburn Micro). There were, however, one or two glaring omissions.

Probably the most prominent one was Ninject. Ninject was the first IOC container I ever used, when I was first learning about dependency injection about ten years ago, and it is one of the most popular containers in the .NET community. Yet try as I might, I simply have not been able to find a Ninject adapter for the .NET Core abstractions anywhere. If anyone knows of one, please leave a note in the comments below and I’ll update this post accordingly.

Having said that, it isn’t all that surprising, because Ninject does have some rather odd design decisions that might prove to be a stumbling block to implementing Microsoft’s specifications. For example, it eschews nested scopes in favour of tracking lifecycles by watching for objects to be garbage collected. Yes, seriously.

Another popular container that doesn’t have an adapter is Simple Injector. This is hardly surprising, though, because Simple Injector has many design principles that are simply not compatible with Microsoft’s abstraction layer. The Simple Injector authors recommend that their users let Microsoft’s built-in IOC container handle framework code, and use Simple Injector as a separate container for their own application code. If Simple Injector is your personal choice here, this is probably a good approach to consider.

Finally, there doesn’t seem to be an adapter for TinyIOC, which is not on Daniel’s list. However, since TinyIOC is primarily intended to be embedded in NuGet packages rather than being used as a standalone container, this is not really surprising either.

Some final observations

I would personally recommend — and certainly, this is likely to be my practice going forward — choosing one of the containers that implements the Microsoft abstractions, and using those abstractions to configure your container as far as it is sensible to do so. Besides making it relatively easy to swap out your container for another if need be (not that you should plan to do so), the Microsoft abstractions introduce a standard vocabulary and a standard set of assumptions to use when talking about dependency injection in .NET projects.
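By way of a hedged sketch, this is roughly what that looked like in an ASP.NET Core 2.x Startup class using Autofac’s adapter at the time of writing: register what you can through the abstractions, then hand the whole collection over to the container. IEmailSender and EmailSender are hypothetical types; ContainerBuilder, Populate and AutofacServiceProvider come from Autofac and its adapter package.

```csharp
// In Startup.cs — returning an IServiceProvider from ConfigureServices
// tells ASP.NET Core 2.x to use that provider instead of the built-in one.
public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var builder = new ContainerBuilder();
    // Container-specific extras go through the container's own API...
    builder.RegisterType<EmailSender>().As<IEmailSender>();
    // ...while everything registered via the abstractions is copied in.
    builder.Populate(services);
    return new AutofacServiceProvider(builder.Build());
}
```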

However, I would strongly recommend against sticking to the Microsoft abstractions like glue. Most IOC containers offer significant added value, such as convention-based registration, lazy injection (Func<T> or Lazy<T>), interception, custom lifecycles, or more advanced forms of generic resolution. By all means make full use of these whenever it makes sense to do so.

For anyone who wants to tinker with the tests (or alert me to containers that I may have missed), the code is on GitHub.

The state of IOC containers in ASP.NET Core
Mon, 15 Oct 2018

One of the first things that I had to do at my new job was to research the IOC container landscape for ASP.NET Core. Up to now we’ve been using the built-in container, but it’s turned out to be pretty limited in what it can do, so I’ve spent some time looking into the alternatives.

There is no shortage of IOC containers in the .NET world, some of them with a history stretching as far back as 2004. But with the arrival of .NET Core, Microsoft has now made dependency injection a core competency baked right into the heart of the framework, with an official abstraction layer to allow you to slide in whichever one you prefer.

This is good news for application developers. It is even better news for developers of libraries and NuGet packages, as they can now plug straight into whatever container their consumer uses, and no longer have to either do dependency injection by hand or to include their own copies of TinyIOC. But for developers of existing containers, it has caused a lot of headaches. And this means that not all IOC containers are created equal.

Conforming Containers in .NET Core

Originally, the .NET framework provided just a simple abstraction layer for IOC containers to implement: the IServiceProvider interface. This consisted of a single method, GetService(Type t). As such, all an IOC container was expected to do was to return a specific service type, and let the consumer do with it what it liked.
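For reference, the original abstraction really is this small, as declared in the base class library:

```csharp
// The whole of the original abstraction: one method, nothing else.
// Registration, lifetimes and disposal were left to each container's own API.
public interface IServiceProvider
{
    object GetService(Type serviceType);
}
```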

But there’s a whole lot more to dependency injection than just returning a service that you’re asked for. IOC containers also have to register the types to be resolved, and then — if required to do so — to manage their lifecycles, calling .Dispose() on any IDisposable instances at the appropriate time. When you add in the possibility of nested scopes and custom lifecycles, it quickly becomes clear that there’s much more to it than just resolving services.
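Nested scopes and lifecycle management under the new abstractions can be sketched like this (UnitOfWork is an invented example type):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddScoped<UnitOfWork>();
using var provider = services.BuildServiceProvider();

UnitOfWork uow;
using (var scope = provider.CreateScope())
{
    uow = scope.ServiceProvider.GetRequiredService<UnitOfWork>();
    Console.WriteLine(uow.Disposed); // False inside the scope
}
Console.WriteLine(uow.Disposed);     // True: disposing the scope disposed it

public class UnitOfWork : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}
```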

And herein lies the problem. For with the introduction of Microsoft.Extensions.DependencyInjection and its abstractions, Microsoft now expects containers to provide a common interface to handle registration and lifecycle management as well.

When you register multiple services for a given type, when you request one, the one that you get back has to be the last one registered.

When you request all of them, they have to be returned in the order that they were registered.

When a container is disposed, it has to dispose services in the reverse order to that in which they were created.
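The ordering rules above can be sketched against the default container, which passes its own specification (the handler types and the DisposalLog helper are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<IHandler, FirstHandler>();
services.AddTransient<IHandler, SecondHandler>();

var provider = services.BuildServiceProvider();

// Requesting all of them preserves registration order.
var names = provider.GetServices<IHandler>().Select(h => h.GetType().Name).ToList();
Console.WriteLine(string.Join(", ", names)); // FirstHandler, SecondHandler

// Disposing the provider disposes in reverse creation order.
provider.Dispose();
Console.WriteLine(string.Join(", ", DisposalLog.Entries)); // SecondHandler, FirstHandler

public interface IHandler { }
public static class DisposalLog
{
    public static readonly List<string> Entries = new();
}
public class FirstHandler : IHandler, IDisposable
{
    public void Dispose() => DisposalLog.Entries.Add("FirstHandler");
}
public class SecondHandler : IHandler, IDisposable
{
    public void Dispose() => DisposalLog.Entries.Add("SecondHandler");
}
```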

There are also rules around which constructor to choose, registration of open generics, requesting types that haven’t been registered, resolving types lazily (Func<TService> or Lazy<TService>) and a whole lot more.

There are two points worth noting here. First, conforming containers MUST pass these tests, otherwise they will break ASP.NET Core or third party libraries. Secondly, some of these requirements simply cannot be catered for in an abstraction layer around your IOC container of choice. If a container disposes services in the wrong order, for example, there is nothing you can do about it. Cases such as these require fundamental and often complex changes to how your container works, which in some cases might be breaking changes.

For what it’s worth, this is a salutary lesson for anyone who believes that they can make their data access layer swappable simply by wrapping it in an IRepository<T> and multiple sets of models. Data access layers are far more complicated than IOC containers, and the differences between containers are small change compared to what you’ll need to cater for if you want to swap out your DAL. As for making entire frameworks swappable, I’m sorry Uncle Bob, but you’re simply living in la-la land there.

All containers are equal, but some are more equal than others

So should we just stick with the default container? While many developers will, that is not Microsoft’s intention. The built-in container was explicitly made as simple as possible and is severely lacking in useful features. It cannot resolve unregistered concrete types, for example. Nor does it implicitly register Func<T> or Lazy<T> (though the latter can be explicitly registered as an open generic). Nor does it have any form of validation or convention-based registration. It is quite clear that they want us to swap it out for an alternative implementation of our choice.
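As a sketch of that parenthetical remark: Lazy<T> can be wired up in the built-in container as an open generic with a small helper. The Lazier class below is a hypothetical name for a commonly used pattern, not part of the framework:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<Expensive>();
// The built-in container won't hand you Lazy<T> implicitly, but you can
// register a wrapper for it yourself as an open generic:
services.AddTransient(typeof(Lazy<>), typeof(Lazier<>));

using var provider = services.BuildServiceProvider();
var lazy = provider.GetRequiredService<Lazy<Expensive>>();
Console.WriteLine(lazy.IsValueCreated); // False until first use
Console.WriteLine(lazy.Value != null);  // True

public class Expensive { }

// Minimal Lazy<T> subclass that defers resolution back to the provider.
public class Lazier<T> : Lazy<T> where T : class
{
    public Lazier(IServiceProvider provider)
        : base(() => provider.GetRequiredService<T>()) { }
}
```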

However, this is easier said than done. Not all IOC containers have managed to produce an adapter that conforms to Microsoft’s specifications. Those that have, have experienced a lot of pain in doing so, and in some cases have said that there will be behavioural differences that won’t be resolved.

There are also concerns that third party library developers might only test against the default implementation and that subtle differences between containers, which are not covered by the specification, may end up causing problems. Additionally, there is a concern that by mandating a standard set of functionality that all containers MUST implement, Microsoft might be stifling innovation, by making it hard (or even impossible) to implement features that nobody else had thought of yet.

But whether we like it or not, that is what Microsoft has decided, and that is what ASP.NET Core expects.

Build a better container?

So what is one to do? While these issues are certainly a massive headache for authors of existing IOC containers, it remains to be seen whether they are an issue for authors of new containers, written from scratch to implement the Microsoft specification from the ground up.

This is the option adopted by Jeremy Miller, the author of StructureMap. He recently released a new IOC container called Lamar, which, while it offers a similar API to StructureMap’s, has been rebuilt under the covers from the ground up, with the explicit goal of conforming to Microsoft’s specification out of the box.

Undoubtedly, there will be other new .NET IOC containers coming on the scene that adopt a similar approach. In fact, I think this is probably a good way forward, because it will allow for a second generation of containers that have learned the lessons of the past fifteen years and are less encumbered with cruft from the past.

Whether or not the concerns expressed by authors of existing containers will also prove to be a problem for authors of new containers remains to be seen. I personally think that in these cases, the concerns may be somewhat overblown, but it will be interesting to see what comes out in the wash.

Just how clean is Uncle Bob’s Clean Architecture?

It’s admittedly not something to which I’ve given much thought. I’ve always had a lot of respect for Uncle Bob and his crusade for greater standards of professionalism and craftsmanship in software development. I have two of his books — Clean Code and The Clean Coder — and I heartily recommend them to software professionals everywhere.

But I hadn’t given much thought to what he says about architecture in particular, so I thought I’d check it out.

He has written a whole book about the subject. I haven’t read it in its entirety yet, but he also wrote a short summary in a blog post back in 2012. He illustrates it with his well-known diagram of concentric circles, with entities at the centre, then use cases, then interface adapters, and frameworks and drivers at the outer edge.

It’s basically a different way of layering your application — one that rethinks what goes where. That’s fair enough. One of the things that a clean architecture needs to deliver is clear, unambiguous guidelines about what exactly goes where. A lot of confusion in many codebases arises from a lack of clarity on this one.

But the other thing that a clean architecture needs to deliver is a clearout of clutter and unnecessary complexity. It should not encourage us to build superfluous anaemic layers into our projects, nor to wrap complex, unreliable and time-wasting abstractions around things that do not need to be abstracted. My question is, how well does Uncle Bob’s Clean Architecture address this requirement?

Separation of concerns or speculative generality?

Uncle Bob points out that the objective at stake is separation of concerns:

Though these architectures all vary somewhat in their details, they are very similar. They all have the same objective, which is the separation of concerns. They all achieve this separation by dividing the software into layers. Each has at least one layer for business rules, and another for interfaces.

Now separation of concerns is important. Nobody likes working with spaghetti code that tangles up C#, HTML, JavaScript, CSS, SQL injection bugs, and DNA sequencing in a single thousand-line function. I’m not advocating that by any means.

But it’s important to remember that separation of concerns is a means to an end and not an end in itself. Separation of concerns is only a useful practice when it addresses requirements that we are actually facing in reality. Making your code easy to read and follow is one such requirement. Making it testable is another. When separation of concerns becomes detached from meeting actual business requirements and becomes self-serving, it degenerates into speculative generality. And here be dragons.

The classic example of speculative generality is the idea that you “might” want to swap out your database — or any other complex and fundamental part of your system — for some other unknown mystery alternative. This line of thinking is very, very common in enterprise software development and it has done immense damage to almost every codebase I’ve ever worked on. Time and time again I’ve encountered multiple sets of identical models, one for the Entity Framework queries, one for the (frequently anaemic) business layer, and one for the controllers, mapped one onto another in a grotesque violation of DRY that serves no purpose whatsoever but has only got in the way, made things hard, and crucified performance. Furthermore, this requirement is seldom needed, and on the rare occasions when it is, it turns out that the abstractions built to facilitate it were ineffective, insufficient, and incorrect. All abstractions are leaky, and no abstractions are more leaky than ones built against a single implementation.

You really don’t want to be subjecting your codebase to that kind of clutter. It is the complete antithesis of every reasonable concept of “clean” imaginable. In any case, if it’s not a requirement that your clients are actually asking for, and are willing to pay extra for, it is stealing from the business. It is no different from taking your car to the garage with nothing more than faulty spark plugs and being told you need a completely new engine.

Ground Control to Uncle Bob

So how does Uncle Bob’s Clean Architecture stack up in this respect? It becomes fairly clear when he lists its benefits.

1. Independent of Frameworks. The architecture does not depend on the existence of some library of feature laden software. This allows you to use such frameworks as tools, rather than having to cram your system into their limited constraints.

2. Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.

3. Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.

4. Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.

5. Independent of any external agency. In fact your business rules simply don’t know anything at all about the outside world.

Points two and three are good ones. These are points that true separation of concerns really does need to address. We need to be able to test our software, and if we can test our business rules independently of the database, so much the better — though it should be borne in mind that this isn’t always possible. Similarly, just about every application needs to support multiple front ends these days: a web-based UI, a console application, a public API, and a smattering of mobile apps.

But points 1, 4 and 5 are the exact problem that I’m talking about here. They refer to complex, fundamental, deeply ingrained parts of your system, and the idea that you might want to replace them is nothing more than speculation.

Point 1 actually makes two mutually contradictory statements. A “library of feature laden software” is the exact polar opposite of “having to cram your system into their limited constraints.” In fact, if anything is “cramming your system into their limited constraints,” it is attempting to reduce your system to support the lowest common denominator between all the different frameworks.

When I get to point 5, I have to throw my hands up in the air and ask, what on earth is he even talking about here?! Business rules are, by their very definition, all about the outside world! Or is he trying to tell us that we need to abstract away tax codes, logistics, Brexit, and even the laws of physics themselves?

As Joel Spolsky put it in his classic essay on Architecture Astronauts:

When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there’s a general pattern: sending files. That’s one level of abstraction already. Then they go up one more level: people send files, but web browsers also “send” requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It’s the same thing again! Those are all sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it’s getting really vague and nobody really knows what they’re talking about any more. Blah.

When you go too far up, abstraction-wise, you run out of oxygen. Sometimes smart thinkers just don’t know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don’t actually mean anything at all.

These are the people I call Architecture Astronauts…

I’m sorry, but if making your business logic independent of the outside world isn’t architecture astronaut territory, then I don’t know what is.

Clean means less clutter, not more

There are other problems too. At the start, he says this:

Each has at least one layer for business rules, and another for interfaces.

This will just encourage people to implement Interface/Implementation Pairs in the worst possible way: with your interfaces in one assembly and their sole implementations in another. While there may be valid reasons to do this (in particular, if you are designing some kind of plugin architecture), it shouldn’t be the norm. Besides making you jump around all over the place in your solution, it makes it hard to use the convention-based registration features provided by many IOC containers.

Then later on he speaks about what data should cross the boundaries between the layers. Here, he says this:

Typically the data that crosses the boundaries is simple data structures. You can use basic structs or simple Data Transfer objects if you like. Or the data can simply be arguments in function calls. Or you can pack it into a hashmap, or construct it into an object. The important thing is that isolated, simple, data structures are passed across the boundaries. We don’t want to cheat and pass Entities or Database rows. We don’t want the data structures to have any kind of dependency that violates The Dependency Rule.

This is horrible, horrible advice. It leads to the practice of having multiple sets of identical models for no reason whatsoever clogging up your code. Don’t do that. It’s far simpler to just pass your Entity Framework entities straight up to your controllers, and only transform things there if you have specific reasons to do so, such as security or a mismatch between what’s in the database and what needs to be displayed to the user. This does not affect testability because they are POCOs already. Don’t over-complicate things.

Of course, there may be things that I haven’t understood here. As I said, I haven’t read the book, only the blog post, and he no doubt mentions all sorts of caveats and nuances that need to be taken into account. But as they say, first impressions count, and when your first impressions include a sales pitch for layers of abstraction that experience tells me are unnecessary, over-complicated, and even outright absurd, it doesn’t exactly encourage me to read any further. One of the things that a clean architecture needs to deliver is the elimination of unnecessary and unwieldy layers of abstraction, and I’m not confident that that is what I’ll find.

Productivity suggestion: stop using the mouse
Mon, 03 Sep 2018

I could write a long, rambling blog post here with anecdotes and examples, but instead, I’ll just get straight to the point. If you want to see significant productivity gains, and avoid having repetitive strain injury destroying your programming career when you head into middle age, stop using the mouse. Mousing may be easy and intuitive, but it is slow, cumbersome, and it trashes your wrists.

I speak from experience there. When I first started experiencing wrist pain, I found that of all the things I tried — ergonomic keyboards, learning Colemak, what have you — by far the most effective step that I took was to cut down on my mouse usage and adopt a more keyboard-centric workflow. Today, about thirteen years after the first onset of discomfort, I’m almost entirely pain-free.

But even if you aren’t suffering wrist pain, mousing is still painfully inefficient and cumbersome for many tasks. Watching people thrashing around with the mouse, selecting text then faffing about with toolbars and popup menus is painful when you know that they could achieve pretty much the same thing far more quickly with judicious use of Ctrl-C and Ctrl-V.

Look for features of your software that let you accomplish things more quickly. For example, most modern text editors will let you quickly search for a file by name by typing a keystroke such as Ctrl-P, or a command by typing Ctrl-Shift-P.

Learn to use Spotlight on the Mac, or the search facility in the Windows start menu (press the Win key, then just type the name of the program or document you want to open).

Install Vimium on Chrome or Firefox. With this, you can press “f” to bring up shortcuts on each link or input box on a web page that you can type to jump to them.

Learn to use the command line. If you’re on Windows, git bash is your friend.

Once you get into the swing of things, you can then start considering other more advanced techniques, such as customising shortcuts in your most commonly used programs, or even learning to use a keyboard-centric editor such as emacs or vim.

Learning to go mouseless takes time and effort, and the chances are that you’re not going to be able to go cold turkey right from the start. But like learning a new language, it’s well worth the effort of learning a new shortcut every day. Your wrists will thank you for it, your boss will thank you for it, and your stakeholders will thank you for it.

First impressions of JetBrains Rider
Tue, 28 Aug 2018

Up until recently, if you wanted to develop in .NET, your options for which IDE to use were pretty limited. Your choice was basically Visual Studio or … er, Visual Studio. Sure, there are one or two open source alternatives such as SharpDevelop, or you could use OmniSharp with a text editor, but these are pretty basic by comparison, and they tend not to see much use by anyone other than hobbyists.

Now there’s nothing wrong with Visual Studio per se. It’s a great IDE, with a ton of cool features, it does the job, and it does it well. But having just one high quality IDE to choose from contributed massively to the monocultural nature of .NET, with many teams insisting on being spoon-fed by Microsoft. Not surprisingly, many leading .NET developers have been clamouring for a decent, professional quality alternative over the years.

And what better company to deliver on that demand than JetBrains? As authors of not only the phenomenally popular Resharper but also IDEs for other platforms including IntelliJ IDEA, PyCharm, RubyMine and WebStorm, they were already most of the way there as it was. The absence of a fully-fledged .NET IDE to complete their line-up was puzzling, to say the least.

Well about a year ago, they finally delivered. And in the past couple of weeks or so I’ve been trying out their offering: Rider.

The first impression that I get of Rider is that it seems a lot more stable and less resource intensive than the combination of Visual Studio and Resharper. Although it has a different look and feel to Visual Studio, it brings you the full power of almost all of Resharper’s toolchain into a standalone editor that works, and works well. It comes in versions for Windows, Linux and OSX, giving you true cross-platform development. If you’ve ever wanted to do .NET development on Linux, now you have a way to do so.

Rider has some particularly nice touches. One thing I like about it is its built-in file comparison tool. As well as comparing two files against each other, or a locally checked out file against a version in source control, and as well as editing the differences, you get some handy buttons that let you copy chunks from one side to the other with a single mouse click. And it gets even better than that — thanks to its tight integration with the rest of the IDE, you get full code completion functionality, and even access to refactoring tools such as renaming methods or organising usings from within the diff window. A feature such as this really comes into its own when dealing with copy-and-paste code.

Rider’s diff/merge window, complete with code completion tools

Having said that, it does have its quirks and gotchas that Visual Studio users need to be aware of. Being based on the same core as other JetBrains IDEs, it follows their workflows and mental models rather than Visual Studio’s. So, for example, clicking “Run” on the toolbar doesn’t attach the debugger; you have to click the “Debug” button next to it to do that. And unlike Visual Studio, it doesn’t warn you when you edit your source code while the debugger is attached, nor does it lock the files down into read-only mode. This can lead to some initially puzzling situations when you try stepping through some code only to find that it has lost track of all the local variables. But the differences aren’t extensive, and if you’ve used other JetBrains IDEs before, or even if you’ve just used something else as well as Visual Studio, it doesn’t take long to get up to speed with it. To make the transition easier, Rider allows you to use Visual Studio key bindings instead of the Resharper-based or IntelliJ-like options.

Although Rider will handle most Visual Studio solutions just fine, there are a few corner cases that it struggles with. It didn’t work well with one of our products at work that includes a number of WCF services, and a colleague who also tried it out six months ago said he ran into problems with some older WebForms-based code. Its Docker support is also less mature than Visual Studio’s. But it’s improving all the time, and no doubt these problems will be resolved sooner or later.

Is it worth switching to Rider? Certainly some people will benefit from it more than others. I think the people most likely to get value out of Rider are polyglot programmers who have a subscription to the entire suite of JetBrains desktop tools, and who will benefit greatly from having a common set of IDEs across multiple languages. Small businesses with more than five developers (which thus exceed the licensing limits for Visual Studio Community) will also benefit because Rider is considerably cheaper than a subscription to Visual Studio Professional. And Linux users now have an option for a high-end, professional quality IDE that targets the .NET ecosystem. But .NET traditionalists probably won’t touch it with a barge pole, and some legacy projects may experience a certain amount of friction.

But it’s well worth considering nonetheless. And whether you adopt it or not, Rider brings some much needed diversity to the landscape of high-end .NET IDEs. In so doing, it goes a long way towards breaking down the suffocating monoculture in many parts of the .NET ecosystem that insists on being spoon-fed by Microsoft. And that can only be a good thing.

It’s not just an opinion, it’s scar tissue
https://jamesmckay.net/2018/08/its-not-just-an-opinion-its-scar-tissue/
Thu, 16 Aug 2018 09:00:55 +0000

Software developers such as myself often have strong opinions about how code should be written. While some people may be tempted to dismiss these as “just an opinion,” the truth of the matter is that more often than not, these strong opinions are forged in the fires of Things Going Wrong And Having To Clear Up Afterwards.

The project that you have to thank for that is called Bills Knowledge Base.

Bills Knowledge Base, or BKB as it was affectionately known, was an internal web application in Parliament used to keep track of the progress of legislation. When I was brought onto the project in early 2009, it had all of a sudden stopped displaying any data. And I was asked to fix it. NOW.

It quickly became clear why this was the case. Someone had just deployed a new version and had missed out an important DLL. The reason why it wasn’t showing any data instead of crashing out with a stack trace or an error page was that it was riddled with Pokémon exception handling. All over the place. Put there by some code generation for which the templates had been thrown away.
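For anyone unfamiliar with the term, “Pokémon” exception handling means catching them all. A minimal sketch of the anti-pattern (the method and assembly names here are hypothetical, not the actual BKB code):

```csharp
using System;

// Anti-pattern sketch: a catch-all handler that swallows every failure.
// If a required assembly is missing, the exception it raises is eaten
// here, and the caller quietly renders an empty page instead of failing
// loudly with a stack trace pointing at the real problem.
string LoadBills()
{
    try
    {
        // Stand-in for a call into the DLL that was never deployed.
        throw new DllNotFoundException("ImportantAssembly.dll");
    }
    catch (Exception)
    {
        return string.Empty; // the error vanishes without a trace
    }
}

Console.WriteLine(LoadBills().Length); // 0: no data, and no diagnostics either
```

At the very least, a handler like this should log the exception and rethrow, or not catch at all, so that a missing dependency fails fast instead of masquerading as an empty database.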

Having deployed the missing DLL, I then turned my attention to the database.

It probably won’t surprise you when I tell you that it was a complete mess. Foreign key constraints were missing, leaving orphaned rows everywhere. Dates were stored in text fields in a whole array of mutually incompatible formats. Fields that were supposed to be required were blank. Enumeration fields contained unrecognisable mystery values. It was a miracle that the system actually ran at all, given the state it was in.

I did the only thing that one can do in such a situation. I rolled up my sleeves and set to work cleaning up the data.

It took me a month. One whole month.

I eventually managed to rip out the Pokémon exception handling, harden the system, and make it behave properly. That took even longer.

It’s now more than five years since I last worked on BKB. When I handed it over, it worked properly, it was robust, and the data had long since been licked into shape. I don’t know what development has been done on it since then, but it was still faithfully doing its job when I left the place earlier this year. So if you ever feel inclined to question what I have to say about exceptions, just head over to https://services.parliament.uk/bills/. Getting that little corner of the web to the place where it is today left me with some scar tissue. And it’s that scar tissue that makes me twitch whenever I see bad error handling code.

An update on Lambda Tools
https://jamesmckay.net/2018/07/an-update-on-lambda-tools/
Mon, 23 Jul 2018 08:00:05 +0000

A little under a year ago, I started work on a new open source project to manage deployment of serverless code to AWS Lambda. This grew out of a task that I’d started at work, where we had a number of Lambda functions managing various features of our infrastructure. At the time, they were being managed rather chaotically through Terraform, and I wanted to get a Continuous Delivery pipeline set up for them.

As I have since moved on to a new job, I thought I should probably say a word or two about it.

Use Serverless instead.

I was introduced to the Serverless framework by a colleague a few months before I left my last job, and I was immediately impressed. It does everything I’d envisaged for Lambda Tools and a whole lot more, and it is actively developed by a full-time team with contributions from the open source community. Besides AWS, it also supports Azure, Google Cloud Platform, and several other providers. The fact that Serverless is a thing saved me masses of work on a project that I was struggling to fit in round everything else.

I’m particularly impressed by the way that Serverless works. Rather than manipulating AWS resources independently, as Terraform does, it works by generating CloudFormation templates. This makes things massively more robust than trying to configure different resources independently of each other. Since CloudFormation is built into AWS itself, and everything it does is transactional, you’re a lot less likely to end up with things getting out of sync with each other when making changes.
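By way of illustration, a minimal serverless.yml along these lines (the service name, handler, and schedule below are made up) is all the framework needs; running `serverless deploy` compiles it into a CloudFormation template and applies the whole thing as a single stack update:

```yaml
# Illustrative sketch only -- names and schedule are hypothetical.
service: infra-housekeeping

provider:
  name: aws
  runtime: python3.6

functions:
  cleanup:
    handler: handler.cleanup     # module.function to invoke
    events:
      - schedule: rate(1 hour)   # CloudWatch Events trigger
```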

When I left the Parliamentary Digital Service, the WebOps team was still using Lambda Tools for most of their existing code, though I had started the transition to Serverless with a Continuous Delivery pipeline for one particular project. However I don’t know what their plans are for it in the long run.

As for myself, I don’t have any plans to develop Lambda Tools any further. We aren’t using a serverless platform at my present job and I don’t anticipate us doing so in the near future either. Even if we did, the fact that there is a mature and robust alternative means that I would be using Serverless rather than trying to carry on reinventing the wheel.

A note on performance reviews

In many organisations, you are required to complete some form of annual performance review process in which you agree some objectives with your line manager to be completed over the following twelve months. In the Parliamentary Digital Service, this was called the Individual Performance Review, or IPR.

For a while now, I’ve wanted to release an open source tool or library and build an online community around it. I’ve thrown a few things against the wall over the years, but nothing has ever stuck. But the Powers That Be thought that to do something like that would be good for recruitment, so I put it down as one of my IPR objectives for the 2017-2018 reporting year. Lambda Tools was the result.

On the face of it, it sounds like a good idea. You have a personal objective that is closely aligned with the objectives of your employer. Why not combine the two and pick up some brownie points for doing something that you’re passionate about anyway?

Unfortunately, it didn’t work out that way.

It was always viewed as a low priority by the rest of the team, who gave me little or no encouragement to keep working on it, and who weren’t well placed to pitch in and help anyway because they were Ops engineers rather than developers. In theory, we were supposed to have “10% time” to work on projects such as this, but while many other teams made full use of their 10% time, on my team it simply didn’t happen. As a result, I ended up doing most of my work on it on the train and in the evenings, just to have something to put down on my IPR form. It ended up feeling like a lead weight round my shoulders, and to then discover that something already existed that did everything I wanted it to do and more left me feeling thoroughly discouraged. I’m sure you would feel discouraged too if you’d discovered you’d spent a whole lot of your own time reinventing the wheel just so that you could tick a box on a form.

If there’s one lesson I’ve learned, it is this: if you have to set performance objectives at work, stick to what you can deliver in your 90% time. Annual performance review processes are nothing more nor less than bureaucratic enterprisey box-checking exercises that simply do not deliver the benefits that they claim to offer. Their feedback loops are far too slow. They suck the life out of everything they touch, and if you let them get their grubby paws on your 10% time or your pet projects, they will suck the life out of that too. Keep the beast locked up in its cage. Don’t let it rob you of your passion.

Some thoughts on DevOps
https://jamesmckay.net/2018/07/some-thoughts-on-devops/
Mon, 16 Jul 2018 09:00:57 +0000

It’s now six weeks since I started at my new job, and I’m really enjoying it. Returning to .NET has felt like a homecoming in many ways. Even though I’ve been quite critical at times of some of the things that go on in the Microsoft ecosystem, it’s what has paid the bills for most of the past sixteen years, it’s a platform that I enjoy working with, and I’d built up a lot of experience and expertise in it over that time.

My two-year hiatus from .NET was spent mostly in the world of DevOps and cloud computing with AWS. While I gained some valuable experience with it, I never really settled down in it. In particular, I was unhappy about being pigeonholed as a “WebOps engineer” on what our delivery manager insisted was “an Ops team.” I’m a developer, not an Ops guy, and besides, that kind of thinking completely flies in the face of what DevOps is supposed to be all about.

If you’re calling your team an “Ops team,” you’re not doing DevOps.

There’s a very good reason why DevOps is called DevOps and not OpsDev or Ops. It is Development first and Ops second. Or, if you want to put it a different way, it is about the Development of a software product to automate your Ops. Jez Humble, who wrote the book on Continuous Delivery, tells us that there’s no such thing as a DevOps team for a good reason. In the DevOps world, Ops is a software product, not a team.

This being the case, while you may need experienced Ops specialists to give you direction on what needs to be built, you also need experienced developers to build it. They need to have a thorough grounding in concepts such as design patterns, the SOLID principles, dependency injection, separation of concerns, test-driven development, algorithmic complexity, refactoring, and the like. You need to recruit, promote, plan, prioritise, and provide training accordingly. Otherwise you’ll either limit what you’re able to achieve, or else you’ll end up with unmaintainable code that needs to be rewritten. And when you’re dealing with infrastructure as code, a rewrite is far, far harder than when you’re dealing with business logic.

In any case, DevOps needs to be the responsibility of your development team as a whole. The whole point of DevOps is to break down the silos between Development and Ops, and to have a separate DevOps team (or worse, a separate Ops team) just creates another silo that you could be doing without.

Your Repository is not a Data Access Layer
https://jamesmckay.net/2018/07/your-repository-is-not-a-data-access-layer/
Tue, 10 Jul 2018 09:00:34 +0000

The Repository pattern has come in for a lot of criticism from high-end .NET developers over the past few years. This is understandable, because in most projects, the Repository layer is usually one of the worst-implemented parts of the codebase.

Now I’ve been critical of badly implemented Repositories myself, but to be fair, I don’t think we should ditch the pattern altogether. On the contrary, I think that we could make much more effective use of the Repository pattern if we just abandoned one popular misconception about it.

Your Repository is (mostly) not a DAL.

If you’re wondering what I mean, here is an example of a typical Repository method. It comes from BlogEngine.net, an open source ASP.NET blogging platform, and it is typical of the kinds of Repository methods that you and I have been working with on a daily basis for years:
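
A representative sketch of the style (the names here are made up for illustration; this is not the actual BlogEngine.NET source):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Post
{
    public string Title { get; set; }
    public bool IsPublished { get; set; }
    public DateTime DateCreated { get; set; }
}

public class PostViewModel
{
    public string Title { get; set; }
    public DateTime DateCreated { get; set; }
}

public class PostRepository
{
    // Could be backed by EF, NHibernate, RavenDB, or a plain list --
    // nothing below betrays which.
    private readonly IQueryable<Post> posts;

    public PostRepository(IQueryable<Post> posts) => this.posts = posts;

    public List<PostViewModel> GetRecentPosts(int take) =>
        posts.Where(p => p.IsPublished)
             .OrderByDescending(p => p.DateCreated)
             .Take(take)
             .Select(p => new PostViewModel
             {
                 Title = p.Title,
                 DateCreated = p.DateCreated
             })
             .ToList();
}
```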

Now this isn’t bad code. It’s actually quite clean code. It’s clear, well-formatted, and easy to understand, even if returning a ViewModel from your Repository does make me twitch a bit. But where is the data access logic?

There is not a single line in this code that tells me which underlying persistence mechanism is being used. Are we talking to Entity Framework? To NHibernate? To RavenDB? To a web service? To Amazon DynamoDB? Or to a program for comparing human and chimp genomes? In just about every .NET project that I’ve encountered, the Repository classes are all populated with methods just like this one. They may contain some LINQ queries, but these won’t give me any indication either. Yet in every single case, they’ve been in projects called My.Project.DAL or something along those lines.

We’re sometimes told that the role of the Repository layer is to abstract away your data access logic from your business logic. But in methods such as this, the data access logic appears pretty thoroughly abstracted to me already.

No, this is business logic, pure and simple.

Why we’ve been thinking of the Repository as a DAL

The reasons why Repositories are viewed as a data access layer are purely historical. The classic three-layer architecture dates back to the late 1990s, when everybody thought that stored procedures were the One True Best Practice, and that moving your BLL and DAL onto separate hardware was the right approach to scalability problems that almost nobody ever had to face in the Real World. Back in the early days of .NET 1.0, your typical Repository contained method after method that looked something like this:
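
Something along these lines (a period-flavoured sketch with invented names, not code from any real project):

```csharp
using System.Data;
using System.Data.SqlClient;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// The early-2000s shape: hand-written ADO.NET, one stored procedure
// per query, with the same boilerplate repeated in method after method.
public class LegacyPostDal
{
    private readonly string connectionString;

    public LegacyPostDal(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public Post GetPostById(int id)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("usp_GetPostById", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@PostId", id);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return null;
                return new Post
                {
                    Id = (int)reader["PostId"],
                    Title = (string)reader["Title"]
                };
            }
        }
    }
}
```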

It was pretty much in-your-face that this was data access code. It was also very, very tedious and repetitive to maintain. It was this tedium that gave rise to modern O/R mappers, and in fact, in the early days, offerings such as LLBLGen Pro and NHibernate were sometimes actually referred to as “generic DALs.” Then, eventually, Microsoft got in on the act with Entity Framework.

In a nutshell, your data access layer is now Entity Framework itself.

Your Repository is first and foremost business logic

The problem with viewing modern-day Repositories as a DAL is that it demands that you draw a clear distinction between data access logic and business logic, while obfuscating that very distinction.

I have yet to see a clear, coherent definition of where the distinction lies. The nearest I can find is a vague and woolly notion that LINQ code is data access, resting on an equally vague and woolly notion that IQueryable&lt;T&gt; is tight coupling. Now Mark Seemann makes some valid points in his blog post — LINQ is indeed a leaky abstraction — but what that means in practice is that if you run up against the leaks in the abstraction, you are dealing with inseparable concerns, which simply can’t be categorised cleanly as either business logic or data access logic, and have to be tested using integration tests rather than unit tests. Another example of inseparable concerns is where you have to bypass Entity Framework altogether and go directly to the database, for example for performance reasons.

In fact, LINQ may be a leaky abstraction, but it’s a much better abstraction than any alternative you’re going to come up with. Once again, LINQ code gives you no indication whatsoever of what underlying data access mechanism you are actually using, and in many cases you can — and should — test anything you do with IQueryable<T> without hitting the database. In any case, query construction implements business rules and is therefore well and truly a business concern.
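
For instance, a query rule can be checked against an in-memory sequence via AsQueryable(), with no database anywhere in sight (a sketch with made-up data):

```csharp
using System;
using System.Linq;

// AsQueryable() turns any in-memory collection into an IQueryable<T>,
// so the same LINQ expression that would run against an O/R mapper can
// be verified directly in a unit test.
var bills = new[]
{
    new { Title = "Finance Bill", IsActive = true  },
    new { Title = "Lapsed Bill",  IsActive = false }
}.AsQueryable();

// The business rule under test: only active bills are listed.
var active = bills.Where(b => b.IsActive).Select(b => b.Title).ToList();

Console.WriteLine(string.Join(", ", active)); // Finance Bill
```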

So what is the Repository pattern, as implemented in most projects, best for? Simple: a query layer. While query objects are a better choice for more complex queries, and extension methods on IQueryable<T> should be considered seriously for cross-cutting concerns such as paging and sorting, for simpler queries with only a few arguments each, a Repository is not a bad choice.
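
As a sketch of the paging case (the method and class names below are my own, not from any particular library):

```csharp
using System;
using System.Linq;

var numbers = Enumerable.Range(1, 10).AsQueryable();

// Page 2, three items per page: items 4, 5 and 6.
Console.WriteLine(string.Join(",", numbers.Page(2, 3))); // 4,5,6

// A cross-cutting concern expressed once as an IQueryable<T> extension:
// it composes with any query, whatever the underlying provider.
public static class QueryableExtensions
{
    public static IQueryable<T> Page<T>(this IQueryable<T> source, int pageNumber, int pageSize) =>
        source.Skip((pageNumber - 1) * pageSize).Take(pageSize);
}
```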

The Great Shorts Debate
https://jamesmckay.net/2018/06/the-great-shorts-debate/
Tue, 26 Jun 2018 11:00:37 +0000

It’s the height of summer. The Great British Heatwave is in full swing. People are questioning whether their office dress codes are fit for purpose. And this means one thing in particular: The Great Shorts Debate.

Now I work for a company where I am allowed to wear shorts to work whenever I like without anyone so much as batting an eyelid. However, not everybody is so fortunate. Far too many workplaces are still saddled with stuffy medieval dress codes that demand that office workers turn up to work in winter woollies (sorry: long trousers) even in a heatwave. It seems that every year we see news reports of men and boys wearing skirts in protest against such bureaucratic and pointy-haired nonsense.

Regardless of what you think of such protests, there is no valid reason whatsoever why office workers should not be allowed to wear shorts to work.

I’ll just say it: corporate dress codes that do not allow men to wear shorts to work are discrimination. It’s as simple as that. Long trousers are sweaty, uncomfortable, restrictive and stifling in warm weather. They make you feel grumpy and irritable, while increasing the likelihood that you’ll spend the last hour of the day watching the clock for the minute you can get out the door and change into something more sensible. They lower productivity while contributing nothing whatsoever to the bottom line.

Nobody’s asking you to adopt an “anything goes” dress code here, with ripped cut-off jeans, tank tops, socks and sandals, or bare feet in client meetings. Even if you think that cargo shorts are too casual, you can still look crisp and clean in a combination of chino or tailored shorts with a polo shirt or a button-down short-sleeved shirt, ankle socks, and tennis shoes. As for places that require jackets and ties, they figured that one out in Bermuda a century ago. Besides, when even the BBC allows its weathermen to appear on national TV in shorts, what makes you think you have an excuse?

The only legitimate reason why shorts should be off-limits in hot weather is health and safety. If you’re working with dangerous chemicals in a laboratory, for example, you may need that extra protection. But in an office, you aren’t working with dangerous chemicals, so these constraints do not apply. By all means insist that your staff are clean, tidy, professional and sharp in their appearance. But insisting that they turn up to work dressed for winter in the middle of a heatwave is just silly.