Docker Community Forums

I’m trying out a Docker cloud with UCP in our datacenter, but I still have some questions that I couldn’t find an answer to.

When I have a webserver container running and I scale it up so that I get,
for example, 3 web containers, how is the traffic handled? Is there some
kind of load balancer? Where does the traffic ‘enter’ the cloud to reach
the webservers inside my Docker cluster?

How does the client know where the container runs? How does it know the IP of the webserver?

When I’ve got 2 containers that both run a webserver on one host, they
both need to expose port 80. I know you can map 80:80 for container 1
and 81:80 for container 2, but how will that work? When the client knows
the IP address and enters it in their browser, the browser will connect to
port 80. I’ve never seen anybody specify port 81 in their browser, so how
can this be ‘fixed’?

In UCP, how do you specify on which host a container will run? Is it
done by adding label constraints to the container? And how do I add
labels to hosts after they are created?

Can you automatically scale up the number of, for example, webservers when demand is high (dynamic scaling)?

When I’ve launched a container on, let’s say, the AWS cloud and I want to
bring it back to the on-premise ESX host, is this possible?

Right now I’ve installed UCP, but I was wondering if there are alternatives, since the features of UCP are rather limited.

I know this is a lot of questions (and maybe stupid ones), but I couldn’t find a clear answer yet.

UCP doesn’t provide a load balancer for you. You can run something like Interlock, though: https://github.com/ehazlett/interlock. Interlock watches the Docker events stream and can then set up an HAProxy or nginx config for you that does the heavy lifting.

When you look at a running container in the web interface or in the docker ps output, you should see the public IP that a published port is accessible over. Instead of seeing something like 0.0.0.0:443->443/tcp, it’ll be 1.2.3.4:443->443/tcp, assuming 1.2.3.4 is the actual IP address of the node that is running the container.

The usual fix is to run a load balancer and have all your traffic go through that. Alternatively, you can run as many containers that need port 80 as you have nodes whose IP addresses aren’t already listening on port 80.
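A minimal sketch of that setup, with made-up names and ports, using plain nginx as the proxy (Interlock would generate the equivalent configuration for you automatically):

```shell
# Two web containers published on different host ports.
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx

# One reverse-proxy container in front of them, owning port 80.
# proxy.conf is a hypothetical nginx config with an upstream block
# pointing at this node's IP on ports 8081 and 8082.
docker run -d --name lb -p 80:80 \
    -v "$PWD/proxy.conf:/etc/nginx/conf.d/default.conf:ro" nginx
```

Clients then only ever talk to port 80 on the load balancer; which backend container actually answers is the proxy’s business.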

Label constraints are the best way to do this, yes.
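For the second half of that question: with the classic Swarm scheduler that UCP uses, engine labels are daemon start-up options, so a rough sketch of the workflow looks like this (the storage=ssd label is made up):

```shell
# On the node itself: (re)start the Docker engine with a label.
# Adding a label to an existing host means editing the daemon
# options (DOCKER_OPTS, the systemd unit, etc.) and restarting
# the engine -- labels can't be attached to a running daemon.
docker daemon --label storage=ssd

# Against the UCP/Swarm endpoint: constrain where the container
# may be scheduled by matching that engine label.
docker run -d -e constraint:storage==ssd nginx
```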

I don’t believe UCP has this built in, but you could implement a component that watches the load and does the scaling for you. The tricky part here is determining when your application is under high load. High memory usage? High CPU usage? Something else?
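As a toy illustration of the decision part only, assuming you had already settled on average CPU percentage as the signal (the thresholds here are arbitrary):

```shell
# Toy scaling decision: given an average CPU percentage across the
# web containers (e.g. parsed out of `docker stats`), print whether
# to add a replica, remove one, or do nothing. A real watcher would
# also need cooldowns, min/max replica counts, and a better signal.
decide_scale() {
    avg_cpu=$1
    if [ "$avg_cpu" -gt 80 ]; then
        echo "scale-up"
    elif [ "$avg_cpu" -lt 20 ]; then
        echo "scale-down"
    else
        echo "hold"
    fi
}

decide_scale 90   # -> scale-up
decide_scale 50   # -> hold
```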

You can stop and remove existing containers and schedule new ones with new constraints. In the future, live migration of containers will likely be possible. There is the CRIU project, which provides checkpoint and restore functionality for Linux userspace processes, and the runc project has direct support for this approach. Docker 1.11 is a fairly significant refactor of Docker that swaps out the libcontainer internals for a runc backend. As future releases go forward, this and other runc features will be made available to Docker. I don’t know whether CRIU support is targeted for a specific version of Docker, though.

UCP is still young. It is moving quickly, so it’s definitely something to keep your eye on. The alternatives that come to mind would be Swarm without UCP, Mesos, or Kubernetes.

If there’s anyone that knows more about this stuff, feel free to correct me.

Autoscaling is something we are looking at for the future. We definitely appreciate community input on what kind of policies you would like to use to determine the scaling, as Jeff pointed out.

As Jeff pointed out, UCP development is moving very quickly and we appreciate any feedback you have on the kinds of features you are looking for. Feel free to reach out to me via direct message if you want to discuss more.

Thank you for the replies, it certainly cleared up some things for me.
For question 6:
When I’ve launched a container on, let’s say, the AWS cloud and I want to
bring it back to the on-premise ESX host, is this possible?

When I stop the container and relaunch the image in a new container, all my changes will be lost, right? The only way to prevent this is to commit the container’s image while it’s still running and relaunch a new container from that committed image. This cannot be done from inside UCP, if I understand correctly?

Normally, I would expect your state to be kept in a volume instead of in the container’s write layer. You would have to introduce a multi-host volume driver (like Flocker) to make a volume available on multiple hosts.
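A quick sketch of the volume approach (all names are made up); the data outlives any individual container:

```shell
# Keep state on a named volume instead of the container's write layer.
docker volume create --name webdata
docker run -d --name web1 -v webdata:/var/www/html nginx

# Replacing the container doesn't touch the volume: the new
# container sees the same data.
docker rm -f web1
docker run -d --name web2 -v webdata:/var/www/html nginx
```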

You definitely can, however, use docker commit to commit a write layer, push that image, and then start another container with your new image on another host.
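Sketched out, with placeholder registry and host names (this is the commit-and-push route from the plain CLI, since UCP itself doesn’t expose it):

```shell
# Snapshot the running container's write layer into a new image.
docker commit web1 registry.example.com/myapp:snapshot

# Push it somewhere both environments can reach.
docker push registry.example.com/myapp:snapshot

# Start a replacement container from the snapshot on the other host.
docker -H tcp://esx-host:2376 run -d registry.example.com/myapp:snapshot
```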