Application scaffolding

You could use Visual Studio proper to create the project, but in this tutorial we will use the .NET Core CLI to create and manage our solution and project files. This helps illustrate the components of the project one step at a time. Note also that I will be running these commands from a terminal on a Mac, so mind your slashes if you're on Windows.

Run the command:

dotnet new -h

You will see a plethora of project templates. For our project we will use the ASP.NET Core Empty template, which has the shorthand name web, and we will name the project quickstart.

dotnet new web -n quickstart

You will now have a subdirectory named quickstart that contains the basic files required for your project. Let’s keep on scaffolding our project before we continue on to the code.

Testing our code is always important and should never be forgotten, so we'll create an xUnit test project right now named quickstart-tests.

dotnet new xunit -n quickstart-tests

Same story as before: you now have a folder named quickstart-tests that contains the basic pieces for testing the project. Before it can be truly useful it needs a reference to the main quickstart project. That is easily done with the following command (assuming you are in the quickstart-tests directory).
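The original listing isn't shown here; assuming the two project folders sit side by side, the reference command would look like this:

```shell
# run from inside the quickstart-tests directory
dotnet add reference ../quickstart/quickstart.csproj
```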

Those familiar with typical .NET applications will notice the absence of a solution file. A solution file simply contains references to project files, but since we're using Visual Studio Code we need one so that OmniSharp can provide IntelliSense (code completion). Creating the solution file and adding our project references is again done through the dotnet CLI. Make sure you are now in the root directory of your project.
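The commands themselves aren't shown in the original; a sequence along these lines would produce the solution file described below:

```shell
# run from the root directory that contains both project folders
dotnet new sln -n quickstart
dotnet sln quickstart.sln add quickstart/quickstart.csproj
dotnet sln quickstart.sln add quickstart-tests/quickstart-tests.csproj
```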

We now have a quickstart.sln solution file at our project root with references to both our ASP.NET Core project and our test project.

Does it blend?

Now would be a good time to make sure everything works on your development setup. Open the project in the editor of your choice. Again, I'll be using Visual Studio Code, but feel free to use whatever you like. If you open it with Visual Studio Code it may ask to add some required assets to build and debug; select Yes. That creates a .vscode directory containing some basic files so the editor knows how to build the project and attach a debugger. In Code you can select the debug symbol (a bug with a line through it) and hit the green play button to start debugging. If everything went well, a page should automatically open in your browser that says… Hello World!

If you didn't see that, check your debug console or other tooling to make sure the application started.

Creating an endpoint

Now it's time to get into a little bit of code. As useful as Hello World is, let's make a simple controller and hook it up. Create a Controllers directory at the root of your project and add an InfoController.cs file. Paste the following code in:
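The article's original listing is missing here; a minimal controller matching the description would look something like this (the ServiceName and Version values are illustrative):

```csharp
using Microsoft.AspNetCore.Mvc;

namespace quickstart.Controllers
{
    // [Route("[controller]")] maps this class to /info
    [Route("[controller]")]
    public class InfoController : Controller
    {
        // GET /info -- returns some basic information as JSON
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new
            {
                ServiceName = "quickstart",
                Version = "0.0.1"
            });
        }
    }
}
```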

Here we have a very basic controller that exposes an /info endpoint returning some basic information as JSON. We still need to do some plumbing to hook the controller up, though. Open the Startup.cs file, which handles application registration and configuration. The ConfigureServices() method is where you register services with the IServiceCollection (think dependency injection root). We will simply add the MVC middleware, which will pick up our controller and create the /info route based on the attributes we provided, and we will remove the Hello World response.
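For an ASP.NET Core 2.x project, the edited Startup.cs might look like this (the template's Hello World app.Run call has been removed):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace quickstart
{
    public class Startup
    {
        // Register services with the dependency injection container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // Configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseMvc();
        }
    }
}
```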

The astute will notice that with the Hello World! response removed, nothing responds at the root of the application. We can fix that easily by adding a controller that responds to /. Add a DefaultController.cs to the Controllers directory.
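The original listing is not included; a sketch of such a controller, with an illustrative response body, could be:

```csharp
using Microsoft.AspNetCore.Mvc;

namespace quickstart.Controllers
{
    // An empty route template makes this controller answer at /
    [Route("")]
    public class DefaultController : Controller
    {
        // GET / -- a simple response so the root still answers
        [HttpGet]
        public IActionResult Get()
        {
            return Ok("quickstart is running");
        }
    }
}
```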

It doesn’t do anything special but if you run the application you will have two endpoints responding, /info and /. Take the time now to run the application and verify that the endpoints respond as expected.

Testing

Every application needs automated tests to easily verify functionality as new features are added. For this example we will use xUnit with Microsoft's TestHost package. Its TestServer will host your application in memory and allow you to hit your endpoints using a faux HTTP client.

These are integration tests; unit tests are of course possible too, but outside the scope of this article.

To begin we need a TestServer to interact with. In my opinion the best way to make this happen is an xUnit test fixture, so that your TestServer is reused across all of the tests and then torn down at the end.

Before we add the code we need the dependent NuGet packages. We will add them through the dotnet CLI, but you could use a NuGet package manager extension in your editor, or simply paste the packages into the .csproj file and then run dotnet restore. Run the following commands through the CLI (if you are in the same directory as the csproj file you want to edit, you can leave the csproj out and the CLI will assume the one in the current directory):
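The original commands aren't shown; the essential addition on top of the xUnit template is the TestHost package, which can be installed like this:

```shell
# run from inside the quickstart-tests directory
# (the version resolves to the latest available at install time)
dotnet add package Microsoft.AspNetCore.TestHost
```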

You may get newer versions depending on when you run this tutorial, but the PackageReference entries in the .csproj are your NuGet packages. This is similar to the package.json file if you are coming from a Node.js background.
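The resulting .csproj fragment was not preserved in this copy; it would contain an ItemGroup roughly like the following (version numbers are illustrative and will likely be newer for you):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.7.0" />
  <PackageReference Include="xunit" Version="2.3.1" />
  <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
  <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.1.0" />
</ItemGroup>
```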

Create a TestServerFixture.cs file in the quickstart-tests folder that has the following code:
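The fixture listing is missing from this copy; a sketch that matches the description (an in-memory TestServer built from the app's own Startup, disposed after the tests) would be, assuming the main project's quickstart namespace:

```csharp
using System;
using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Xunit;
using quickstart; // the main project's namespace, for Startup

namespace quickstart.tests
{
    // Shared fixture: one in-memory TestServer reused across the tests.
    public class TestServerFixture : IDisposable
    {
        public TestServer Server { get; }
        public HttpClient Client { get; }

        public TestServerFixture()
        {
            // Reuse the main application's Startup, so all service
            // bindings and configuration apply to the test host too.
            var builder = new WebHostBuilder().UseStartup<Startup>();
            Server = new TestServer(builder);
            Client = Server.CreateClient();
        }

        // Called by xUnit once all of the tests have run.
        public void Dispose()
        {
            Client.Dispose();
            Server.Dispose();
        }
    }

    // Collection definition so test classes can share the fixture.
    [CollectionDefinition("TestServer")]
    public class TestServerCollection : ICollectionFixture<TestServerFixture> { }
}
```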

Notice that this class implements IDisposable; xUnit calls Dispose() once all of the tests have run. Additionally, it references the same Startup class used by the main program, so all of your service bindings and configuration will be part of your TestHost. Pretty neat. You could also initialize some static data here, because the fixture is run before any of the tests in a class.

Now we can move on to the tests themselves. You can simply rename the UnitTest1.cs file to ControllerTests.cs. Generally speaking it's recommended to group tests by individual controller or by the test fixture they require. Since we have some bare-bones controllers that share the same fixture, I'll group the tests into one file. Here is the code for the tests:
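The original test listing is missing; tests in the spirit described (hitting / and /info through the fixture's faux HttpClient) might look like this, assuming the TestServerFixture exposes a Client property:

```csharp
using System.Net;
using System.Threading.Tasks;
using Xunit;

namespace quickstart.tests
{
    // The [Collection] annotation tells xUnit to inject the shared fixture.
    [Collection("TestServer")]
    public class ControllerTests
    {
        private readonly TestServerFixture _fixture;

        // xUnit supplies the fixture through the constructor.
        public ControllerTests(TestServerFixture fixture)
        {
            _fixture = fixture;
        }

        [Fact]
        public async Task Root_Responds_With_Ok()
        {
            var response = await _fixture.Client.GetAsync("/");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }

        [Fact]
        public async Task Info_Returns_Json()
        {
            var response = await _fixture.Client.GetAsync("/info");
            response.EnsureSuccessStatusCode();
            Assert.Contains("application/json",
                response.Content.Headers.ContentType.ToString());
        }
    }
}
```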

Notice the retrieval of the faux HttpClient we use to communicate with our in-memory TestServer, and the constructor parameter that gives us access to the TestServerFixture's TestServer. xUnit injects it automatically because of the [Collection] annotation.

From there it's just simple tests talking to our endpoints. You could imagine having different test fixtures for different purposes. Maybe you have one test fixture where authentication is enabled, another where it is disabled, and another with different environment variables set. You get the idea: you get fine control of your testing environment and can run your tests quickly and efficiently.

Run dotnet test from within the quickstart-tests folder to run the tests and see their results. If they don't pass, look closely at the error messages.

Health and Metrics

Eventually this service will be deployed somewhere; in my case it will live on Kubernetes, where Prometheus polls for application metrics. If you have other technologies in mind, you can skip this section.

To add a /metrics endpoint for Prometheus, and a /health endpoint so Kubernetes can make sure the service stays alive and self-heals, we will use the App.Metrics packages.

Right now if you review the contents of the Program.cs file you will see very little happening. This is where application bootstrapping occurs. There is some crossover with Startup.cs, and many packages can be wired up in either location, but I think adding the metrics middleware is easiest in Program.cs. Run the following commands to install the necessary NuGet packages (make sure you are in the quickstart directory, not quickstart-tests):
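The exact package split varies between App.Metrics versions, but a plausible set of commands (check the App.Metrics documentation for your version) is:

```shell
# run from inside the quickstart directory
dotnet add package App.Metrics.AspNetCore.Mvc
dotnet add package App.Metrics.Formatters.Prometheus
dotnet add package App.Metrics.AspNetCore.Health
```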

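The edited Program.cs listing is missing from this copy; one possible configuration, based on the App.Metrics web-host extensions (verify the exact API against the App.Metrics docs for your version), would be:

```csharp
using App.Metrics.AspNetCore;
using App.Metrics.Formatters.Prometheus;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace quickstart
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                // Wire up App.Metrics and report in Prometheus format.
                .ConfigureMetricsWithDefaults(builder =>
                {
                    builder.OutputMetrics.AsPrometheusPlainText();
                })
                // Expose the /metrics endpoint.
                .UseMetrics(options =>
                {
                    options.EndpointOptions = endpointOptions =>
                    {
                        endpointOptions.MetricsTextEndpointOutputFormatter =
                            new MetricsPrometheusTextOutputFormatter();
                    };
                })
                // Expose the /health endpoint.
                .UseHealth()
                .UseStartup<Startup>()
                .Build();
    }
}
```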
Only a handful of lines were added to the template's bootstrapping code. Run your application and look at the results of /metrics and /health. Very simple and easy to do.

There is more we could cover here, such as how to enable custom metrics and how to incorporate datastore checks into your health endpoint but that will be subject for other documentation.

Docker

Now that we have an application with a few endpoints we can look at creating a Docker container. First, understand that a Docker container is a package containing your application and what is required to run it. You can think of a container as an extremely lightweight VM if that is helpful, but the bottom line is that you get an isolated environment, a sandbox, specific to your application.

Similar to how one barge carries many shipping containers that are isolated from each other, a Docker container is an isolated application running on a shared host. Yes, like a VM, but better: you don't deal with the overhead of a full OS, only your application and its dependencies. You get much greater application density while still keeping the desired amount of isolation.

Create an empty file named Dockerfile, without an extension, at the root of your project. This file describes to Docker the process for creating the container image your application will live in. Consider the following file:
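The original Dockerfile is missing here; a multi-stage sketch matching the description would look like this (the base image tags are from the .NET Core 2.x era and will differ for newer SDKs):

```dockerfile
# --- build stage: SDK image with everything needed to compile ---
FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# restore first so this layer is cached when only source changes
COPY quickstart/*.csproj ./
RUN dotnet restore
COPY quickstart/ ./
RUN dotnet publish -c Release -o out

# --- runtime stage: small image containing only the runtime ---
FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine
WORKDIR /app
COPY --from=build /app/out ./
ENTRYPOINT ["dotnet", "quickstart.dll"]
```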

Let's go through what is going on here together. This Dockerfile leverages something called multi-stage Docker builds: the first stage builds the application, and the second copies the compiled application into the final image.

The motivation for a multi-stage build here is that the application is built in an isolated environment and is completely repeatable, regardless of what is installed on the host machine or your dev machine. This helps stop the well known "works on my machine" errors.

With Docker images, size absolutely matters! The smaller an image, the faster it can be downloaded and started, so generally speaking you want the smallest images possible. That is why the FROM lines have the word alpine in their tag (the part after the :), which denotes they are based on Alpine Linux, a very small base image weighing in at only tens of megabytes.

The build image referenced in the first FROM line is much larger because it contains all of the dependencies necessary for building, which is far more than is required to run the application. Therefore we copy the compiled application out of the build stage and into the smaller runtime image in the final stage. If you want additional detail on the keywords, refer to Docker's documentation.

Create your Docker container with the command (from the root of the project):

docker build -t quickstart:0.01 .

That builds your Docker image into a repository named quickstart with the tag (version) 0.01. Running the command docker images shows something similar to the following:
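The original screenshot of the output is not reproduced here; it would look roughly like this (image IDs and sizes are illustrative, and will vary with image versions):

```
REPOSITORY         TAG                           IMAGE ID       CREATED          SIZE
quickstart         0.01                          ...            2 minutes ago    ~150MB
microsoft/dotnet   2.1-sdk-alpine                ...            ...              >1GB
microsoft/dotnet   2.1-aspnetcore-runtime-alpine ...            ...              ~100MB
```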

You can see the differing image sizes and why we want to use the smaller runtime image to avoid unnecessary dependencies. Let's run it!

docker run -it -p 8080:80 quickstart:0.01

The -it flag runs the container in interactive mode, and -p forwards port 8080 traffic on the host to port 80 in the container. In your browser you can now hit the application at http://localhost:8080. Feel free to hit any of the available endpoints: /metrics, /info, /health, /.

Now, to deploy this image we could run docker push quickstart:0.01, but that would require access to a Docker registry. Once the image is there, you can use a container orchestrator such as Kubernetes to handle the rest of the deployment. Handling that deployment onto Kubernetes is another topic.