How to build a controlled environment to distribute Docker images based on user accounts

Docker itself, AWS (to name just the biggest Docker hosts right now) and many more public and private repository servers are on the market. But sometimes there is a need to host your own registry for Docker images. One reason can be simply because we can; another is, for example, to grant individual pull/push rights on different images to different users and to control access based on expiration dates as well.

Components and the big picture

For this setup we need several software components working together: the firewall to block all ports except 443 for HTTPS, the Nginx reverse proxy to terminate the SSL connection, protect the underlying services against direct access and allow for load balancing, the Docker registry to host the images, and last but not least the Docker token authenticator to identify users and grant access to images (push and/or pull) based on their rights.

With the second version of the registry protocol, Docker introduced the “Docker registry authentication scheme“. This basically delegates access control for images to an outside system and uses the bearer token mechanism for communication. The flow to access a Docker image is:

1. The Docker daemon accesses the registry server as usual and gets a 401 Unauthorized in return, with a “WWW-Authenticate” header pointing to the authentication server the registry trusts.

2. The Docker daemon contacts the authentication server at the given URL and the user identifies against the server.

3. The authentication server checks the access rights based on username, password, image name and access type (pull/push) and returns a bearer token signed with its private key.

4. The Docker daemon accesses the registry again with the bearer token and the image request.

5. The registry server verifies the bearer token using the authentication server's public key and grants access or doesn't.
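Steps 1-3 can be sketched in Python. This is a minimal illustration, not the daemon's real implementation: `parse_www_authenticate` and `token_request_url` are hypothetical helper names, and the hostnames come from this article's setup.

```python
import re
import urllib.parse

def parse_www_authenticate(header):
    """Split 'Bearer realm="...",service="..."' into its key/value parameters."""
    return dict(re.findall(r'(\w+)="([^"]*)"', header))

def token_request_url(header, image, action="pull"):
    """Build the URL the Docker daemon would call on the auth server (steps 2-3)."""
    p = parse_www_authenticate(header)
    query = urllib.parse.urlencode(
        {"service": p["service"], "scope": f"repository:{image}:{action}"}
    )
    return f'{p["realm"]}?{query}'
```

The daemon then sends its credentials to that URL, receives the signed token, and repeats the original registry request with an `Authorization: Bearer <token>` header.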

Firewall

Ubuntu ships with a very simple firewall control tool called “Uncomplicated Firewall“ (ufw). It manages the iptables configuration and lets the user open ports with a single line. If you access the server via SSH, make sure you allow SSH before you enable the firewall. I also recommend installing fail2ban to ban brute-force attempts.

```shell
sudo apt update
sudo apt install -y ufw fail2ban
ufw allow ssh   # only necessary when you need remote access
ufw allow https
ufw allow http
ufw enable
ufw status
```

Nginx reverse proxy

We install Nginx as a Docker service as well, because the update cycle is much faster compared to the distribution's software repository. The stock Nginx container is ready to use and only needs the settings for HTTP and HTTPS. Everything is handled via the HTTPS port, but we also keep HTTP (port 80) open to redirect everything to HTTPS with a 301 (moved permanently) return code.

```dockerfile
FROM docker.io/nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
COPY ssl.conf /etc/nginx/conf.d/ssl.conf
COPY cert /cert
EXPOSE 80
EXPOSE 443
```

This is a very simple Dockerfile that adds the SSL certificates and the HTTP/HTTPS configuration. We could also mount the SSL files and configuration in the docker-compose file and leave the image plain as it is. Both options are valid; it is just a matter of taste.

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name registry.23-5.eu auth.23-5.eu;
    return 301 https://$host$request_uri;
}
```

This is the HTTP configuration for Nginx: accept everything on HTTP and return a 301 (moved permanently) to the same host and path, just with HTTPS.

SSL configuration

The SSL configuration is a little more complicated, as we also specify the ciphers and parameters for the encryption. As this topic is endless and very easy to get wrong, I personally rely on https://cipherli.st as a configuration source.

```shell
openssl dhparam -out dhparams.pem 4096
```

The recommendation is to generate your own Diffie–Hellman parameters with more than 2048 bits. This process can take a very long time. We add the resulting file, together with our keys, to the cert folder.

This configuration is based on the recommendation from cipherli.st. Be aware that one part of this setup is the Strict-Transport-Security header, which can cause a lot of long-term trouble if you mess it up. This completes the basic SSL setup.

And this is the configuration part for the registry server itself. Important here is the client_max_body_size parameter, to make sure that even bigger Docker images get through. Older Docker client versions get a 404, because the registry cannot handle them.
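A sketch of what that server block can look like. The hostname, certificate paths and the `registry:5000` upstream address are assumptions matching the rest of this setup, and the cipher settings from cipherli.st live in ssl.conf:

```nginx
server {
    listen 443 ssl;
    server_name registry.23-5.eu;

    ssl_certificate     /cert/registry.crt;
    ssl_certificate_key /cert/registry.key;

    # Docker image layers can be huge; 0 disables the request body size limit.
    client_max_body_size 0;

    location /v2/ {
        proxy_pass http://registry:5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 900;
    }
}
```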

Let's Encrypt

The easiest way to get a certificate is by using Let's Encrypt. There are different ways to obtain a certificate; we use a very simple one here, the standalone call. certbot opens a mini web server on port 80 to handle the authentication request on its own, so make sure the Nginx container is not running.

Do the certificate request call for the auth and the registry certificate and copy the certificate and private key to your cert folder for the docker build to pick them up. Don't forget the dhparams.pem file.
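With the standalone plugin the calls might look like this (a sketch: certbot must be installed, DNS must already point at this host, port 80 must be free, and the target file names in cert/ are my choice):

```shell
certbot certonly --standalone -d registry.23-5.eu
certbot certonly --standalone -d auth.23-5.eu
cp /etc/letsencrypt/live/registry.23-5.eu/fullchain.pem cert/
cp /etc/letsencrypt/live/registry.23-5.eu/privkey.pem cert/
```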

Docker registry

Now that the server is configured and more or less secured, let's configure the Docker registry and the auth server. Docker Inc. offers a registry container that is relatively easy to handle and configure.

```yaml
- REGISTRY_AUTH=token
- REGISTRY_AUTH_TOKEN_REALM=https://auth.23-5.eu/auth
- REGISTRY_AUTH_TOKEN_SERVICE="Docker registry"
- REGISTRY_AUTH_TOKEN_ISSUER="Acme auth server"
- REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/ssl/domain.crt
```

The configuration is done in the docker-compose file itself. The important pieces are the REALM, so the registry can redirect the client to the auth server, the issuer, and the cert bundle from that auth server to verify the bearer token later.

Docker Token Authenticator

Docker Inc. does not provide an auth server out of the box as it does with the registry itself; this is left for registry providers to build their own. Luckily Cesanta stepped up and built a nicely configurable auth server to be used with the registry. docker_auth supports different ways of storing information about users:

Static list of users

Google Sign-In

Github Sign-In

LDAP bind

MongoDB user collection

External Program (gets login parameters and returns 0 or 1)

In our case the way to go is the MongoDB user collection, as we can control individually which user has access to which image and easily change that on the fly by modifying the user data in the DB itself.

```yaml
server:  # Server settings.
  # Address to listen on.
  addr: ":5001"

token:
  issuer: "Acme auth server"  # Must match issuer in the Registry config.
  expiration: 900
  certificate: "/ssl/domain.crt"
  key: "/ssl/domain.key"

mongo_auth:
  dial_info:
    addrs: ["authdb"]
    timeout: "10s"
    database: "23-5"
    username: "ansi"
    password_file: "/config/mongopass.txt"
    enabled_tls: false
  collection: "users"

acl_mongo:
  dial_info:
    addrs: ["authdb"]
    timeout: "10s"
    database: "23-5"
    username: "ansi"
    password_file: "/config/mongopass.txt"
    enabled_tls: false
  collection: "acl"
  cache_ttl: "10s"
```

This is the configuration file for the auth server. It has four main parts.

Server

Which port to listen on.

Nginx handles the TLS termination; this server therefore does no TLS handling.

Token

Use the same issuer as configured in the registry server itself and provide the certificate files for signing the bearer token.

Mongo_auth

Where the user information is stored and how to access the MongoDB; the password is saved in a simple ASCII file. In our case, as we are behind a firewall inside a Docker network, we don't use TLS to access the MongoDB.

ACL_Mongo

Besides the user information, the access control list (ACL) can also be stored in MongoDB. The configuration is the same as for mongo_auth, but there is an additional cache setting, as this information is held in memory and refreshed every 10 seconds.

The MongoDB was initialized by the docker-compose file with an admin user “root” and password “example”. We use this account to create a new database called “23-5” and a new user there with username “ansi” and password “test”. This database stores all users and ACLs. The registry users themselves are stored with a bcrypt-hashed password and some labels. Create such a password hash with:

```shell
sudo apt install apache2-utils
htpasswd -nB USERNAME
```
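Creating the “23-5” database and the “ansi” user can be done in the mongo shell, connected as the root/example admin from the docker-compose setup. This is a sketch; the readWrite role is my assumption for what docker_auth needs:

```javascript
// Run inside the mongo shell, e.g.:
//   mongo -u root -p example --authenticationDatabase admin
var db235 = db.getSiblingDB("23-5");
db235.createUser({
  user: "ansi",
  pwd: "test",
  roles: [{ role: "readWrite", db: "23-5" }]
});
```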

Besides username and password, we can also store labels of any kind for a given user, which we can then use again in the ACLs. In our case the ACLs define access to all Docker images matching a given name pattern, with read-only or full access. For example, the user “waldi” has full access to all Docker images under “test/*”, only read access to everything under “prod/*”, and nothing else. ACLs have a seq number defining the order in which they are processed; the first matching ACL wins.

Labels can be combined, so for example:

```
ACL:
{
  "match": {"name": "${labels:project}/${labels:group}-${labels:tier}"},
  "actions": ["push", "pull"],
  "comment": "Contrived multiple label match rule"
}

USER:
{
  "username": "busy-guy",
  "password": "$2y$05$B.x.......CbCGtjFl7S33aCUHNBxbq",
  "labels": {
    "group": [
      "web",
      "webdev"
    ],
    "project": [
      "website",
      "api"
    ],
    "tier": [
      "frontend",
      "backend"
    ]
  }
}
```

Would give push and pull access to the docker image

```
website/webdev-backend
```

These variables can be used in the ACL:

${account} — the account name, aka username

${name} — the repository name; “*” can be used, so for example “prod/*” gives access to “prod/server”
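Put together, the stored ACL documents for the “waldi” example from above might look like this sketch (the seq values are arbitrary assumptions):

```
{ "seq": 10, "match": { "account": "waldi", "name": "test/*" },
  "actions": [ "push", "pull" ], "comment": "full access to test images" }

{ "seq": 20, "match": { "account": "waldi", "name": "prod/*" },
  "actions": [ "pull" ], "comment": "read-only access to prod images" }
```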

Generating bearer SSL key

In order to sign a bearer token we need a key. This can be a self-signed certificate created with openssl:

```shell
openssl req \
  -newkey rsa:4096 \
  -days 365 \
  -nodes -keyout domain.key \
  -out domain.csr \
  -subj "/C=EU/ST=Germany/L=Berlin/O=23-5/CN=auth.23-5.eu"

openssl x509 \
  -signkey domain.key \
  -in domain.csr \
  -req -days 365 -out domain.crt

openssl req \
  -x509 \
  -nodes \
  -days 365 \
  -newkey rsa:2048 \
  -keyout server.key \
  -out server.pem
```

Docker-compose

We can configure and start the auth server, the registry server and Nginx with one docker-compose file:

```yaml
version: '3'
services:
  nginx:
    restart: always
    build:
      context: nginx
    ports:
      - 80:80
      - 443:443
  mongoclient:
    image: docker.io/mongoclient/mongoclient:latest
    restart: always
    depends_on:
      - authdb
    ports:
      - 3000:3000
    environment:
      - TZ=Europe/Berlin
      - STARTUP_DELAY=1
  authdb:
    image: docker.io/mongo:4.1
    restart: always
    volumes:
      - /root/auth_db:/data/db
    environment:
      - TZ=Europe/Berlin
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example
    ports:
      - 27017:27017
    command: --bind_ip 0.0.0.0
  dockerauth:
    image: docker.io/cesanta/docker_auth:1
    volumes:
      - /root/auth_server/config:/config:ro
      - /root/auth_server/ssl:/ssl:ro
    command: --v=2 --alsologtostderr /config/auth_config.yml
    restart: always
    environment:
      - TZ=Europe/Berlin
  registry:
    image: docker.io/registry:2
    volumes:
      - /root/auth_server/ssl:/ssl:ro
      - /root/docker_registry/data:/var/lib/registry
    restart: always
    environment:
      - TZ=Europe/Berlin
      - REGISTRY_AUTH=token
      - REGISTRY_AUTH_TOKEN_REALM=https://auth.23-5.eu/auth
      - REGISTRY_AUTH_TOKEN_SERVICE="Docker registry"
      - REGISTRY_AUTH_TOKEN_ISSUER="Acme auth server"
      - REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/ssl/domain.crt
```

I also added a mongoclient container for easy access to the MongoDB server. Please be aware that this one is not secured by the Nginx reverse proxy and is only for testing. You can also access the MongoDB from the command line:
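For example, using the root account from the docker-compose file above (assuming the mongo shell client is installed on the host):

```shell
mongo --host localhost --port 27017 -u root -p example --authenticationDatabase admin
```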

The need

Calendar

I really like to plan my day in my calendar. Therefore I added a lot of external iCal feeds like meetup, open-air cinema and, for sure, launchlibrary. To decide on transportation I always had the Weather Underground page open in a separate browser tab. This was very inconvenient, so I wrote a small script that gets weather predictions via API call from wunderground, exports an iCal feed and updates my Google calendar with weather conditions.

Wunderground

Weather Underground is (or at least was for many years) the coolest weather page on the internet: a really great UI and a wonderful API to get current weather conditions and weather predictions for the next 10 days. Furthermore (and that is why I really, really like it), users could send their own weather sensor data to the site to enhance the sensor mesh network and get a nice visualization. Unfortunately the service is losing features on a monthly basis and the page itself is down for several hours every now and then. Very sad, but I still love it.

As I said, they have a nice API to get a weather forecast for the next 10 days on an hourly basis. OK, we can all discuss how dependable a weather prediction for a certain hour in 8 days is, but at least for the next few days it is really helpful. I am using the forecast10day and hourly10day API endpoints to get a nicely formatted JSON document from wunderground. If you want to run this script for your own area, you need an account and an API key, as the calls are rate-limited (but free).

PWS

My favorite makerspace (Motionlab.berlin) has an epic weather phalanx (as I love to call it) and sends local weather conditions to wunderground. Therefore, instead of the conditions for a whole city, I can ask for the conditions at a certain weather reporting station. In our case it's the IBERLIN1705 station. Check out the current conditions here.

Forecast10day

The API call to http://api.wunderground.com/api/YOUR-API-KEY-HERE/forecast10day/q/pws:IBERLIN1705.json returns, for each of the next 10 days, information about humidity, temperature (min/max), snow, rain, wind and much more. I take these data and create one calendar entry each morning at 06:00-06:15 with summary information for the day. Especially for days beyond the 4-day boundary this daily condition is more accurate than hourly information. Getting this information in Python is very easy:
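Something like the following sketch; the exact JSON field names follow the old wunderground forecast10day format as I recall it, so treat `daily_summaries` and its keys as assumptions:

```python
import json

import requests  # third-party; the library the script actually uses

API = "http://api.wunderground.com/api/YOUR-API-KEY-HERE/forecast10day/q/pws:IBERLIN1705.json"

def fetch_forecast(url=API):
    # One GET call; wunderground answers with a JSON document.
    r = requests.get(url)
    return json.loads(r.content)

def daily_summaries(data):
    """Reduce the forecast dict to (day, conditions, low, high) tuples."""
    days = data["forecast"]["simpleforecast"]["forecastday"]
    return [
        (d["date"]["pretty"], d["conditions"],
         d["low"]["celsius"], d["high"]["celsius"])
        for d in days
    ]
```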

I am using requests to make the REST call and parse the “content” value with json loads. As easy as it looks. The data variable then contains a dictionary with all weather information on a silver platter (if the API is not down, which happens way too often).

Hourly10day

http://api.wunderground.com/api/YOUR-API-KEY/hourly10day/q/pws:IBERLIN1705.json contains the weather information on an hourly basis for the next 10 days, so the parsing is very similar to the forecast10day call. I am especially interested here in rain, snow, temperature, wind, dew point and UV index, as these are the values I want to monitor and turn into calendar entries when they are outside a certain range:

Wind > 23 km/h

Temperature > 30 or < -10 C

UV-Index > 4 (6 is max)

Rain and Snow in general

(Temperature – Dew point) < 3
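The threshold list above translates into a simple check. The limits are copied from the list; the flattened field names in `hour` are my own naming, not the raw wunderground keys:

```python
def weather_warnings(hour):
    """Return warning strings for one hourly forecast entry.

    `hour` is a dict with numeric values for wind_kph, temp_c, uvi,
    rain, snow and dewpoint_c (hypothetical flattened field names).
    """
    warnings = []
    if hour["wind_kph"] > 23:
        warnings.append("windy")
    if hour["temp_c"] > 30 or hour["temp_c"] < -10:
        warnings.append("extreme temperature")
    if hour["uvi"] > 4:
        warnings.append("high UV index")
    if hour["rain"] or hour["snow"]:
        warnings.append("precipitation")
    if hour["temp_c"] - hour["dewpoint_c"] < 3:
        warnings.append("muggy")
    return warnings
```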

Humidity in general is not so important and highly dependent on the current temperature. But the dew point (“the atmospheric temperature (varying according to pressure and humidity) below which water droplets begin to condense and dew can form”) is very interesting when you want to know whether it is getting muggy. Even at 10 C, a very low difference between temperature and dew point means you really feel the cold crawling into your bones. 🙂

Ical

To create an iCal feed I use the icalendar library in Python. Very handy for creating events and exporting them as an iCal feed.

The summary is the text your calendar program shows in the calendar grid itself, while the description is displayed when showing the calendar entry details. “dtstart” and “dtend” mark the time range. For the timezone I use the pytz library, and “to_ical()” serializes the whole thing. That's basically all you need to create an iCal feed.

Google

Google Calendar can import and subscribe to calendars. While importing adds the entries to an existing calendar once (great for concerts or public transport bookings), subscribing creates a new calendar and refreshes the feed only every 24 hours or more. That is fine for long-lasting events like meetups or rocket launches, but weather predictions change several times per hour. Therefore I added a small feature to the script to actively delete and re-create calendar entries, so I can run it every 3 hours and keep the calendar up to date.

As always, Google offers nice and very handy API endpoints to manipulate the data. Besides calling the REST endpoints by hand, there are libraries for different languages. I use “googleapiclient” and “oauth2client” to access my calendar. The first step is to create a new calendar in Google, then activate the Calendar API in the developer console and create an API key for your app. The googleapiclient takes care of the OAuth dance and stores the credentials in a local file.

Python

```python
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import client, file, tools

SCOPES = 'https://www.googleapis.com/auth/calendar'

def getService():
    # Load cached credentials; run the OAuth flow only on the first call.
    store = file.Storage('token.json')
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('credentials.json', SCOPES)
        creds = tools.run_flow(flow, store)
    return build('calendar', 'v3', http=creds.authorize(Http()))
```

If you call this function for the very first time, it requires the OAuth dance: basically, open a web page and grant access to your Google calendar. The secrets are stored in the token.json file and reloaded on every call.

“getService” calls the function above to get an access object. “events().list().execute()” requests a list of the first 100 calendar entries, “events_result.get()” returns an array with all calendar entries and their details, and “service.events().delete().execute()” removes entries.
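Stitched together, the cleanup pass might look like this sketch (`delete_all_events` is my naming; `service` is the object returned by getService, and the calendar id is whatever your weather calendar uses):

```python
def delete_all_events(service, calendar_id, max_results=100):
    """Delete the first `max_results` entries so fresh forecasts can be written."""
    events_result = service.events().list(
        calendarId=calendar_id, maxResults=max_results
    ).execute()
    for entry in events_result.get("items", []):
        service.events().delete(
            calendarId=calendar_id, eventId=entry["id"]
        ).execute()
```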