Breaking Up the Nerd Dinner Monolith

Legacy apps are typically monoliths. Nerd Dinner started as a single ASP.NET website which handled presentation, business logic and data storage. Larger apps may be built as n-tier architectures, maybe with an ASP.NET front end and one or more WCF services in the back end - but that's really just a small number of connected monoliths.

Monolithic designs severely limit your applications. They're time-consuming and complex to deploy, the large codebases are difficult to work with, and it's impossible to scale or update just one part of the app. If a feature is performing badly, you can't scale up just that feature - you have to scale up the whole app.

Chapter 5 of Docker on Windows covers breaking up a monolith with a feature-driven approach, so each new release focuses on extracting or adding one feature. That means you're incrementally breaking up your legacy codebase, adding value with each release and not taking on a whole re-write.

This week I focus on a performance feature, making a synchronous database call asynchronous using Docker containers.

The Problem with Synchronous Database Access

When you create a new dinner in Nerd Dinner, a bunch of data gets saved to the database using synchronous calls. In the Dockerized version, the app container talks directly to the database container.

The code to save a dinner is in the DinnersController class, and (back in Chapter 3) it used to work like this:
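The snippet itself isn't reproduced here, but a minimal sketch of the synchronous version looks like this - the names are simplified from the real DinnersController, so treat it as illustrative rather than the book's exact code:

```csharp
// Sketch of the original synchronous save - illustrative, not the exact book code.
// db is an instance-level Entity Framework context, so it (and its pooled
// SQL connection) stays alive for the whole HTTP request.
[HttpPost]
public ActionResult Create(DinnerFormViewModel dinnerViewModel)
{
    if (ModelState.IsValid)
    {
        Dinner dinner = dinnerViewModel.Dinner;
        dinner.HostedBy = User.Identity.Name;
        db.Dinners.Add(dinner);
        db.SaveChanges(); // the SELECTs and INSERTs all run here, while the user waits
        return RedirectToAction("Details", new { id = dinner.DinnerID });
    }
    return View(dinnerViewModel);
}
```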

That db.SaveChanges() call from Entity Framework looks straightforward enough, but it's hiding a lot of data access code. There are multiple lookups happening with SELECT statements, and new data going in with INSERT statements. This is all happening while the user is waiting for the page to update.

Worse, the database context object db is an instance variable in the DinnersController class - there's no using statement wrapping the data access. That means you're not explicitly controlling the scope of the database context, so you'll be hogging a connection from the SQL Server connection pool for the duration of the HTTP call.

Database connections are a finite resource - ADO.NET pools them on the client side, and the default pool size is 100 connections. That's why synchronous database access doesn't scale. Under high load you can starve the connection pool, and the next user who tries to save data gets a nasty timeout error saying the app can't get a connection to the database.
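The pool size is a client-side setting you can see (and tune) in the connection string. The server name and credentials here are just placeholders, and the explicit Max Pool Size shown is the value you get anyway if you don't specify it:

```
Server=nerd-dinner-db;Database=NerdDinner;User Id=sa;Password=<secret>;Max Pool Size=100
```

When the pool is exhausted, opening a connection waits up to the Connect Timeout period (15 seconds by default) and then throws - and that exception is what surfaces to the user.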

To scale up your app with this architecture, you need to scale up your database alongside your web layer, which is typically not an elastic option.

Asynchronous Data Access with Event Publishing in Docker

The way you get scale here is by making the data access asynchronous - using a message queue to power a fire-and-forget workflow. The web app publishes an event to the queue saying a new dinner has been created, instead of writing to the database. Then the app returns to the user - publishing an event is a super fast operation and won't time out even under high load.

On the other end of the message queue is a handler which listens for events from the web app. The message handler makes the database calls when it receives an event. This architecture does scale - you can run hundreds of web containers, but only a handful of message handler containers, so the database never gets overloaded.

If there's a spike in traffic, events will build up in the message queue - but that's fine, because the message queue gives you delivery guarantees. Users may not instantly see their new data, but that's OK in this scenario - and in many others. Eventual consistency is the trade-off for scalability.

It's easy to move to this architecture with Docker - you run the message handler in a container, and you run the message queue in a container too. I use NATS, a fast and flexible in-memory queue; if you need persistent messaging, you can use RabbitMQ or any of the queues listed in the CNCF landscape.
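Starting the queue is a single command. The nats:nanoserver tag here reflects how the Windows NATS image was published at the time - check Docker Hub for the current Windows variants:

```
docker run -d --name message-queue nats:nanoserver
```

The container name becomes the DNS name other containers use, which is how both the web app and the message handler find the queue - NATS listens for clients on its default port, 4222.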

Running the queue in a container means you can use the same queuing technology in every environment, and the queue inherits the same service level as the rest of your app - in production you'd be running on a multi-node Docker cluster, so the queue automatically gets reliability.

Nerd Dinner Save Handler

The message handler code to save a dinner is really simple. I've changed the controller class in Chapter 5 so instead of writing to the database, the Create method publishes an event:
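The change isn't reproduced here, but its shape is roughly this - a sketch where the event and wrapper class names follow the description below, and the rest is simplified:

```csharp
// Sketch of the reworked Create method - illustrative names, simplified logic.
[HttpPost]
public ActionResult Create(DinnerFormViewModel dinnerViewModel)
{
    if (ModelState.IsValid)
    {
        var eventMessage = new DinnerCreatedEvent
        {
            // map the EF entity to a plain POCO, so the message
            // doesn't drag an EF object graph along with it
            Dinner = Mapper.Map<Dinner>(dinnerViewModel.Dinner),
            CreatedAt = DateTime.UtcNow
        };
        // fire-and-forget: publish to the queue and return straight away
        MessageQueue.Publish(eventMessage);
        return RedirectToAction("Index");
    }
    return View(dinnerViewModel);
}
```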

I'm using AutoMapper to map from the EF definition of a dinner to the POCO definition, so the object inside the message isn't part of an EF graph. And the MessageQueue class is a simple wrapper over the NATS client library.

The new code is tidier (and uses the IDisposable context object correctly), but it's essentially the same logic that was originally in the web app. I've pulled the feature out into a separate component - which is going to run in its own container.
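The handler is essentially a subscribe loop in a console app. A sketch of the pattern (names illustrative), with the context correctly scoped this time:

```csharp
// Sketch of the save handler - subscribes to dinner-created events and
// makes the database calls the web app used to make synchronously.
MessageQueue.Subscribe<DinnerCreatedEvent>(eventMessage =>
{
    // the context is scoped to this one message, so the SQL connection
    // goes back to the pool as soon as the save completes
    using (var db = new NerdDinnerContext())
    {
        var dinner = Mapper.Map<Dinner>(eventMessage.Dinner);
        db.Dinners.Add(dinner);
        db.SaveChanges();
    }
});
```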

The Dockerfile for the message handler uses the same pattern as last week's, where the builder is a separate image which already contains the compiled code. It sets up the configuration settings for the app with environment variables, uses the console exe as the entrypoint, and copies in the compiled app from the builder.

The app image uses an ancient version of the Windows Server Core image - 10.0.14393.1198. That version has a default DNS cache setting which doesn't work nicely in a containerized environment, which is why I have the RUN command executing some PowerShell to disable the DNS cache.
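Put together, the Dockerfile follows that pattern. This is a sketch - the builder tag, exe name, env var and paths are illustrative, and the registry tweak shown is the documented workaround for DNS caching on that image version:

```dockerfile
# escape=`
FROM microsoft/windowsservercore:10.0.14393.1198
SHELL ["powershell", "-Command"]

# this image version caches DNS lookups, which breaks service discovery
# when containers are replaced - disable the cache behaviour
RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' `
      -Name ServerPriorityTimeLimit -Value 0 -Type DWord

# default config, overridable with -e or an env file at run time
ENV MESSAGE_QUEUE_URL="nats://message-queue:4222"

WORKDIR C:\save-handler
ENTRYPOINT ["NerdDinner.MessageHandlers.SaveDinner.exe"]

# compiled output comes from the separate builder image
COPY --from=dockeronwindows/ch05-nerd-dinner-builder:latest C:\src\NerdDinner.MessageHandlers.SaveDinner\bin\Debug\ .
```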

You don't need to do this with recent versions of the Windows image, but it's a powerful feature of the Docker platform that I can build this Dockerfile 10 months after the code was pushed, and get exactly the same output.

Next Up

Next week I'm going to walk through the build process for the images in Chapter 5. It's all isolated in a single Docker image: ch05-nerd-dinner-builder. There's some complexity there in building .NET Framework and .NET Core projects in the same solution, which I'll explain - and I'll also look at the current situation.