Building an ASP.NET Core App with PostgreSQL and Docker

PostgreSQL, often simply “Postgres”, is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards compliance. As a database server, its primary function is to store data securely and retrieve it later at the request of other software applications, whether they run on the same computer or on another machine across a network (including the Internet). It can handle workloads ranging from small single-machine applications to large Internet-facing applications with many concurrent users. Recent versions also provide replication of the database itself for redundancy and scalability.

This image includes EXPOSE 5432 (the postgres port), so standard container linking will make it automatically available to the linked containers. The default postgres user and database are created in the entrypoint with initdb.

Environment Variables
The PostgreSQL image uses several environment variables which are easy to miss. While none of the variables are required, they may significantly aid you in using the image.

POSTGRES_PASSWORD
This environment variable is recommended whenever you use the PostgreSQL image. It sets the superuser password for PostgreSQL; the default superuser is defined by the POSTGRES_USER environment variable. A typical value would be something like “mysecretpassword”.

POSTGRES_USER
This optional environment variable is used in conjunction with POSTGRES_PASSWORD to set a user and its password. It creates the specified user with superuser privileges, along with a database of the same name. If it is not specified, the default user postgres will be used.

PGDATA
This optional environment variable can be used to define another location – like a subdirectory – for the database files. The default is /var/lib/postgresql/data, but if the data volume you’re using is a filesystem mount point (as with GCE persistent disks), Postgres initdb recommends creating a subdirectory (for example /var/lib/postgresql/data/pgdata) to contain the data.

POSTGRES_DB
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
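Taken together, these variables might be supplied to docker run like this (a sketch: the container name and all values here are illustrative, not from the original article):

```shell
$ docker run -d \
    --name store-postgres \
    -e POSTGRES_PASSWORD=mysecretpassword \
    -e POSTGRES_USER=store \
    -e POSTGRES_DB=storedb \
    -e PGDATA=/var/lib/postgresql/data/pgdata \
    -p 5432:5432 \
    postgres
```

This creates a superuser named store with password mysecretpassword, a database named storedb, and keeps the data files under the pgdata subdirectory.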

Step 2: Make sure the PostgreSQL Docker container is up and running with:

$ docker ps -a

Step 3: Test connection from your host system
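If the psql client is installed on the host, one quick way to test the connection looks like this (assuming the container publishes port 5432 to the host and you are connecting as the default postgres user):

```shell
$ psql -h localhost -p 5432 -U postgres -c "SELECT version();"
```

You will be prompted for the password that was set via POSTGRES_PASSWORD.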

Now, we will build an ASP.NET Core MVC application that performs basic data access using Entity Framework. We will use migrations to create the database from our model.

Create the context for Entity Framework Core

First, create a folder named “Repository” by right-clicking on the project and choosing Add ‣ New Folder. Next, add a class to the “Repository” folder named “StoreDbContext.cs”.
Open the file after creating it and add some code to have the class inherit from DbContext like so:
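A minimal sketch of the context might look like the following; the Product entity is a placeholder for whatever model your own application uses:

```csharp
using Microsoft.EntityFrameworkCore;

namespace PostgreApp.Repository
{
    public class StoreDbContext : DbContext
    {
        public StoreDbContext(DbContextOptions<StoreDbContext> options)
            : base(options)
        {
        }

        // Example entity set; replace with your own model classes.
        public DbSet<Product> Products { get; set; }
    }

    // Hypothetical entity used for illustration.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }
}
```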

Add the connection string

The connection string can be added in the OnConfiguring method of the class that inherits from DbContext, or supplied via dependency injection through the options passed to the constructor. The user named in the connection string’s User ID property must exist on the database server and must have rights to create databases.
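As a sketch, the OnConfiguring variant with the Npgsql provider might look like this (the host, credentials, and database name are placeholder values):

```csharp
using Microsoft.EntityFrameworkCore;

public class StoreDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Placeholder values; the User ID account must exist on the
        // server and needs rights to create databases.
        optionsBuilder.UseNpgsql(
            "Host=localhost;Port=5432;Database=storedb;User ID=postgres;Password=mysecretpassword");
    }
}
```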

This piece is necessary since we’re planning to use code-first EF migrations. The “Database.Migrate()” piece is actually responsible for two things:

Creating the database in PostgreSQL if it doesn’t already exist

Migrating the DB schemas to the latest versions

Entity Framework does not do this automatically, so this piece is necessary.
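A sketch of where the call could live, assuming the context is resolvable from the dependency injection container (names here are illustrative):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;

public class Startup
{
    public void Configure(IApplicationBuilder app, StoreDbContext context)
    {
        // Creates the database in PostgreSQL if it doesn't exist and
        // applies any pending migrations to bring it to the latest version.
        context.Database.Migrate();

        app.UseMvc();
    }
}
```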

Setting Up Code-First Migrations

Let’s add support for migrations to our project now. Remember the previous section, where we edited the project.json file and added “EntityFrameworkCore.Tools” to the tools section? This is where that step comes in handy.

Tools –> NuGet Package Manager –> Package Manager Console

Run Add-Migration InitialCreate to scaffold a migration to create the initial set of tables for your model.

The output should include “To undo this action, use Remove-Migration.” That means we’re good. This option is available because we wired up the appropriate tool in the project.json file. If everything went well, you should see a new folder named “Migrations” in the project, along with two files: StoreDbContextModelSnapshot.cs and _InitialCreate.cs.
These files will be used when our application starts up in order to create a database (on the first run) capable of housing our service’s state. However, we still need to wire up both these migrations and the actual StoreDbContext within .NET Core’s Kestrel-hosted startup.
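Registering the context happens in Startup.ConfigureServices. A sketch with the Npgsql provider (the connection string name “StoreDb” is an assumption, not from the original article):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public IConfiguration Configuration { get; set; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Register StoreDbContext so it can be injected into
        // controllers and into Startup.Configure.
        services.AddDbContext<StoreDbContext>(options =>
            options.UseNpgsql(Configuration.GetConnectionString("StoreDb")));

        services.AddMvc();
    }
}
```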

Create a controller

Next, we’ll add an MVC controller that will use EF to query and save data.

Right-click on the Controllers folder in Solution Explorer and select Add ‣ New Item…
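A sketch of such a controller, reusing the hypothetical Product entity and StoreDbContext from earlier:

```csharp
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class ProductsController : Controller
{
    private readonly StoreDbContext _context;

    // The context is supplied by dependency injection.
    public ProductsController(StoreDbContext context)
    {
        _context = context;
    }

    // GET: /Products — query all products via EF.
    public IActionResult Index()
    {
        return View(_context.Products.ToList());
    }

    // POST: /Products/Create — save a new product via EF.
    [HttpPost]
    public IActionResult Create(Product product)
    {
        _context.Products.Add(product);
        _context.SaveChanges();
        return RedirectToAction(nameof(Index));
    }
}
```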

The Program.cs file

We ended up with the following code for the Program.cs file. The interesting part is the call to UseUrls(), which I didn’t have at first while trying to make it run with Docker; without it, Kestrel wasn’t bound to the right network interface, and the application wasn’t accessible from outside the Docker container.
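A sketch of what that Program.cs might look like for this era of ASP.NET Core (the port is an assumption):

```csharp
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            // Bind to all network interfaces, not just localhost;
            // without this the app is unreachable from outside the
            // Docker container.
            .UseUrls("http://*:5000")
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```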

Dockerfiles

Each Dockerfile is a script, composed of various commands (instructions) and arguments listed successively to automatically perform actions on a base image in order to create (or form) a new one. They are used for organizing things and greatly help with deployments by simplifying the process start-to-finish.

Line 1: The FROM directive is probably the most crucial of all the Dockerfile instructions. It defines the base image to use to start the build process. It can be any image, including ones you have created previously. If a FROM image is not found on the host, Docker will try to pull it from the Docker Hub registry. It needs to be the first instruction declared inside a Dockerfile.

Line 4+5: The ENV command is used to set one or more environment variables. These variables consist of key=value pairs that can be accessed within the container by scripts and applications alike. This functionality offers an enormous amount of flexibility for running programs.

Line 8: The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.

Line 11: The WORKDIR directive sets the working directory for the instructions that follow it, including the command defined with CMD or ENTRYPOINT.

Line 14: The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument and runs it to form the image. Unlike CMD, it actually is used to build the image (forming another layer on top of the previous one which is committed).

Line 17: The EXPOSE command declares the port on which the process inside the container listens, enabling networking between the running process and the outside world (i.e. the host). Note that EXPOSE by itself does not publish the port; use the -p flag of docker run for that.

Line 20: The ENTRYPOINT argument sets the concrete default application that is used every time a container is created using the image. For example, if you have installed a specific application inside an image and you will use this image to only run that application, you can state it with ENTRYPOINT and whenever a container is created from that image, your application will be the target.
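Assembled from the instructions described above, the Dockerfile might look roughly like this. This is a sketch: the base image tag, port, and entry point are assumptions based on the text, and the comments and blank lines mean the line numbers will not match the descriptions exactly.

```dockerfile
FROM microsoft/dotnet:latest

# Environment variables for the app (illustrative values)
ENV ASPNETCORE_URLS http://*:5000
ENV ASPNETCORE_ENVIRONMENT Production

# Copy the project into the image's filesystem
COPY . /app

# Subsequent commands run from /app
WORKDIR /app

# Restore NuGet dependencies while building the image
RUN dotnet restore

# Document the port the app listens on
EXPOSE 5000

# Run the application whenever a container starts from this image
ENTRYPOINT ["dotnet", "run"]
```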

Building an ASP.NET Core Image

Step 1: Open the “Docker Quickstart Terminal” from the start menu.

After a few seconds your Docker terminal should be ready. We can run the docker images command to see all the images:

$ docker images

Step 2: Build an image from a Dockerfile

Now that the Dockerfile for our custom ASP.NET Core application is ready, we need to turn it into an image by running the docker build command. Let’s have a look at this build command:

We’ll change the current path to the project root:

$ cd /D/VS2015/Docker/PostgreApp/src/PostgreApp/

and ensure that Docker can see our project directory by listing its contents:

$ ls

So let’s take a look at how we can build and run our Docker image.

$ docker build -t postgre-dotnetcore .

Check that the image has been created correctly and is present in our Docker machine:

$ docker images

Okay cool, our image is ready to run.

The next thing we’re going to do is fire up a PostgreSQL database container and give it a name. Any time we want to link one container to another, we simply give it a name and then reference it by that name.
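A sketch of such a command; the container name postgres-db is an assumption, chosen so it can be referenced when linking later:

```shell
$ docker run -d --name postgres-db -e POSTGRES_PASSWORD=mysecretpassword postgres
```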

Alright, let’s run docker ps, and there we go: we can see that it is now up.

Containers can be linked to another container’s ports directly using --link remote_name:local_alias in the client’s docker run. This will set a number of environment variables that can then be used to connect.

We need the docker run command to create the container, and then we’re going to link it against the postgres database.
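Assuming the PostgreSQL container was started with --name postgres-db and our image was tagged postgre-dotnetcore, the command might look like this (the app container name and published port are illustrative):

```shell
$ docker run -d -p 5000:5000 --link postgres-db:postgres --name dotnetcore-app postgre-dotnetcore
```

Inside the app container, the database is then reachable under the hostname postgres, the alias given in --link.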