Træfik (https://traefik.io/) works pretty well as an automated HTTPS proxy too. It is still missing a caching feature though, so it might not be a good fit for everyone. It has a Docker backend that works with Docker labels (much the same way the https-portal project uses environment variables).

Traefik is really cool for most uses. I tried to use it in production, and here are some of the shortcomings I found (the first two issues are on GitHub):

- If you use DNS for a blue-green deployment, Traefik does not honor DNS TTLs (it uses Go's resolver, which may cache results), so when you do the switch you might still end up on the "old" environment

- A bug in one of the cool features: serving an error page when your backend returns certain HTTP codes

- Some configuration options are not documented (but are easily found using GitHub search)

Uses Go for what? DNS resolution? The default behavior is to use the system DNS resolver. The Go resolver is only used if the system resolver is not available (e.g. if the binary is compiled as pure Go) or if the net.Resolver has the PreferGo flag set (which is false by default).

Is it possible for me to dynamically configure Traefik for non-Docker environments? I would like to run a Traefik instance that acts as an HTTPS proxy for arbitrary domains, hosts, and ports, without having to restart at all.

As long as you're using one of the dynamic backends that Traefik hooks into, such as Consul for example, you can do this pretty easily.

I do this exact thing with Traefik, Consul, and various miscellaneous services that don't fall under the dynamically-discovered umbrella of k8s or Nomad. I write a value to Consul with the desired Host: match and a backend server to route requests to, and Traefik handles routing to any backend server it can reach, without restarts or interrupting service.
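For anyone curious what that looks like with Traefik 1.x's Consul key-value backend, the routing boils down to a few keys written with `consul kv put` (the names, addresses, and exact key layout below are from memory and may differ between versions, so treat this as a sketch):

```
traefik/backends/myapp/servers/server1/url = http://10.0.0.5:8080
traefik/frontends/myapp/backend            = myapp
traefik/frontends/myapp/routes/main/rule   = Host:myapp.example.com
```

Traefik watches the KV prefix, so changes like these take effect without a restart.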

Compose files can be; however, they're used in conjunction with Docker Swarm, and when this is done certain features are made available while others are not. Networks would be used in this case rather than the `link` directive.

Sure, you could use Compose, but it's like using a brick for a nail instead of a hammer. Swarm is just a better tool for production deployments and Compose is better for development; since Swarm is so simple on a single node, there's no reason not to use it, and then you're better set up for multi-node deployments and the best practices surrounding them down the road.

As another commenter mentioned, there's often confusion about using Docker in development vs. production. I think this comes up a lot because people always say "use the same image everywhere!" when that's not always the case. In development you may be using bind mounts to attach your working directory into the container so you can actually edit your files, you may be running the application with nodemon (or your language's equivalent) to monitor changes, or doing whatever else your application needs; maybe you have separate Dockerfiles because you want a more slimmed-down production image.

The idea is that you bring dev/production parity closer, not necessarily to identical. You can ensure all your development teams are using the same version of your language and that things are configured exactly the same, and when you do go to production, the image that gets built will run exactly the same on your CI server as it does on your production node. It allows for more consistency. But things aren't necessarily 100% the same between development and production.

I see Docker for production and Docker for development as two different worlds. I think containers can work well for everyone in development; not everyone needs them in production. If you are using Docker Compose for production, then you might not need the advantages of Docker in production (even though it's still useful for development). If you Google around for production setups of applications, you'll see what a typical flow looks like. I think the key points are automation and spreading your resources, and Docker Compose doesn't address either.

This is a great project, and exactly what I was looking for. I'm currently doing manual cert renewal, and I looked into using jwilder's docker-gen to automate it, but that was an involved process. This brings it all together. Thank you!

I am running RancherOS and Rancher, where I have everything as Docker images, easily manageable and updateable. I'm using docker-compose automation, but my setup with nginx as a reverse proxy has a lot of configuration options, and I am a programmer at heart, not a DevOps guy. I don't know yet if I can use this out of the box, but it will certainly save me a lot of time having it all together.

I don't get it either, even if one uses Docker. I use jails on FreeBSD, but otherwise I have the same setup that does the same thing, just by configuring what comes with all that, so I don't understand the point of this.

You only do it once. And everything you do for your own setup, this tool will need too; you adjust as necessary. When you do your own config, you know what's going on and how to fix and adjust it. And it's one less tool you need to learn and maintain.

I think you're talking about something else. I'm talking about set up and configuration of these tools and servers. Other people's scripts may read those configuration files and apply them but it is the config itself that I do on my own. Even then, I sometimes edit those build scripts if they don't do what I want and I just need something customized. If some other tool set all that up for me, I'd never know how to do it or I'd have to learn it anyway, negating the need for this tool in the first place.

IP blocking is ineffective[1] and can have noticeable collateral damage[2]. As has been discussed at length in discussions about tracking people for copyright protection/enforcement reasons, IP addresses are not a suitable method of identifying/locating/tracking people.

[1] they could just connect from somewhere else next time: mobile instead of home network, coffee shop wireless, day-job guest wireless, train/bus wireless, friend's wireless, tethered to friends' mobile connection, ... - in fact the poster may have used an alternate location this time just-in-case[3].

[2] if a public address/range is blocked, or a range used by a large employer or educational establishment, that could affect many innocent users while not affecting the problem user.

[3] people determined enough to be dicks that they take the time to sign up for a throwaway account are likely determined enough to post from another location, and dickish enough not to care if other people using that location get blocked because of their actions.

I know of one example, which was admittedly beyond silly (Caddy wouldn't start when Let's Encrypt had an outage), and they backpedaled very quickly after it blew up, providing a fix within hours, IIRC.

Your comment, however, paints a different picture... if you know of other examples, please enlighten us.

The issue you're aware of is when I became aware of the project (because of the outages and ensuing coverage here on HN).

My issues are:

- That problem was caused not by a 'bug' but by a deliberate decision that the author made, and defended. I wouldn't use the term 'backpedaled' so much, though: even after relenting and changing the allowed expiry window, the behaviour was essentially the same, albeit with a smaller failure window. AFAIK, Caddy to this day won't start with a valid certificate if it can't renew it and the expiry is within 7 days.

- Before the TLS-SNI-01 challenge was disabled on the LE side, Caddy's behaviour was to randomly select a challenge to use. When I questioned the author about this, his response was "oh well, that would be too much load for LE". The problem there is that the ACME protocol handles this exact scenario: you query the ACME server to find out which of the challenges it will allow you to continue with. As I pointed out at the time, I personally wasn't aware of this before the TLS-SNI issue, but as demonstrated by his response, neither was he. If his project's entire reason for being is registering certs via ACME, wouldn't you expect him to know the basic information/control flow for requesting a cert?

- The author has no apparent concept or understanding of 'separation of concerns'; apparently anything that isn't a single-file Go binary is all too much for the world to bear, regardless of the disastrous results his approach has given the world. I'm all for competing approaches to solving a problem, but when someone essentially ignores any competing approach that doesn't fit their existing narrative of "I have the sole solution to this problem", it's not a sign of someone who is really interested in solving the problem; it's someone interested in pushing their solution. That's fine for a salesman, I guess. Not so much for an 'engineer'.

Edit:

- The 'ads via headers' stunt was just plain weird and creepy, and shows that the author and his partners have no real sense for business. This is demonstrated in their 'Basic' paid support, which states: "We usually respond within 1-2 business days." USUALLY!? You're paying $25 a month per instance so you... I dunno, don't have to install certbot once, and your support response timeframe includes the term "usually". Sweet fucking Jesus.

Recently though, I grew tired of handling 2 or more Docker images, and then I discovered https://github.com/Valian/docker-nginx-auto-ssl. One image that sets up in a single command, without the volume-sharing complexity of jwilder/nginx-proxy. One caveat though is the larger CPU overhead (from Lua) when handling high volume or high reqs/sec.

- SSL: You can put a bunch of things behind a reverse proxy and you can put all your SSL stuff in one place, which makes it a lot easier to deal with, secure and manage.

- Load balancing: If your application needs more than one host, you need some way of distributing requests to multiple hosts. A reverse proxy is one of the easiest ways of achieving this (and comes with above SSL handling as well)

- Caching: They can be very good for caching dynamic but rarely changing resources like news articles. They can also take care of requests for static assets so that your application servers don't have to.

- Multiple apps on a single IP: At the other end of the spectrum, if you have for example a home server, only one application can listen on a given port and you might want to run multiple applications responding to different hostnames. A reverse proxy lets you do this.
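To make that last point concrete, here is a minimal nginx sketch of name-based routing, with two apps behind one public IP, selected by the Host header (hostnames and ports here are invented):

```nginx
server {
    listen 80;
    server_name blog.example.com;
    location / {
        proxy_pass http://127.0.0.1:3000;   # blog app
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name git.example.com;
    location / {
        proxy_pass http://127.0.0.1:3001;   # git web UI
        proxy_set_header Host $host;
    }
}
```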

From the point of view of SSL, they make it easy to bolt SSL support onto existing infrastructure with minimal changes to everything else.

In some cases the separation of duties is useful too: if I change the back-end of one of my applications from Apache+PHP on one server to a new implementation in node.js elsewhere, I don't have to worry about implementing SSL (and caching if the proxy is also used for that purpose) in the new implementation or even needing to change DNS and other service pointers, I can just direct the proxy to the new end-point(s).

For larger organisations (or individual projects) this separation of responsibility might also be beneficial for human resource deployment and access control: keeping the proxy within the security/infrastructure team for the most part and the app deployment/development with specific teams.

> They are unnecessary in most cases

I agree. But that doesn't necessarily mean they are not beneficial in a number of cases.

> and bring more complexity

Though also spread out that complexity, which can help in the management of scaling up and the maintenance of larger scale once there.

Obviously the utility of this very much depends on the project/team/organisation - it is very much a "your mileage will vary" situation.

I've often pondered whether it is useful to use one or not. I came to the conclusion that it's always a nice add-on, and eventually it always helps. You get highly efficient static file serving, adding HTTP headers is a no-brainer, and the same goes for typical stuff like HTTP->HTTPS redirects, basic logging, error pages independent of your app... Also, don't forget that it's an additional layer in the setup that adds extra security, because it's an extra layer. ;-)

It would be nice to have one in Go or Rust if it had all of nginx's features, performance, and documentation/community support.

I agree with you, but it's worth acknowledging that setting capabilities and making sure they persist across updates (e.g. your example breaks the first time a package update is installed) isn't always trivial, especially in bureaucratic enterprise IT environments. And although the risk is lower, an attacker could still potentially find interesting things to do with other low ports unless you've also set up something like authbind to limit the binary to just ports 80/443.

On top of the other comments, I'll add that you can do them more securely. High-assurance security often used proxies to enable support for legacy apps, since the proxies could be clean-slated using rigorous techniques. The legacy systems were often closed-source, way too complicated, or (e.g. Microsoft) deliberately obfuscated. Others already mentioned SSL/TLS. Another example is putting a crypto proxy in front of Microsoft Outlook that communicates with a mail guard. It can scan, encrypt, etc. email with little or no work on the client.

Can do (improvements here) with little to no change to (existing app) is the recurring pattern.

They were popular way before microservices, only less dynamic. Mostly, from an operational perspective, they give a single entry point that can be better secured and monitored. It also allows you to easily decouple unstable backends from your users, so even when functionality is broken (404), the user experience doesn't have to suffer: serve a stale cache, or respond with a properly branded error page that helps the user move forward (instead of a cryptic app-specific error page).

Example: this company had part of their site built by some abstruse static site generator with a million dependencies, written in a language that no one at the company had installed by default, nor knew (I think it was Ruby). We put it all in Docker, and the README changed from 10 lines of install x, y, z version foo to /random, to just "run docker build, then docker run". Most of the people working on those docs didn't need to know about Ruby or gems or lock files or any of that.

A year later I overheard someone say thank god that thing was dockerized because I was absolutely not looking forward to installing all those dependencies just for a typo fix.

It's not; it solves a different issue. If you only have Python as a dependency, a requirements.txt is fine (well, the user needs to install the correct version of Python, pip, and virtualenv/pipenv, but that's doable). But as soon as your app is actually composed of nginx/apache, Python, some background process in Rust, bash scripts for cron jobs, and so on, then you have a problem with app delivery, which Docker solves nicely. Just package everything in a Dockerfile and distribute the image. Bonus point: you can now test it locally, with the same installation.
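A sketch of what that packaging can look like (the file names, paths, and start script below are invented; a real setup might instead split these into separate services under docker-compose):

```dockerfile
FROM python:3.7-slim

# OS-level dependencies that requirements.txt can't express
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx cron && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
COPY deploy/nginx.conf /etc/nginx/conf.d/app.conf
COPY deploy/crontab /etc/cron.d/app

# start.sh would launch nginx, cron, and the app processes
CMD ["./start.sh"]
```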

I jumped on the Docker wagon very early for this exact reason. I don't care about hype, but it does solve these kinds of problems.

One of the issues that `docker-compose build` solves better is that venv is super confusing to your non-Python users. I've had people:

- skip it and do `python setup.py install` instead
- skip it and install requirements globally with `pip install -r requirements.txt` because that's how it worked for them last time they used Python
- version control their `venv` directory
- backup and upload their `venv` directory on new server when moving

Now they just run `./appctl setup`, which asks them for a few things, writes a `setup.env` file for docker-compose, and then runs `docker-compose build`.
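For anyone wondering what such a wrapper involves, here's a stripped-down sketch (the questions, variable names, and `appctl` itself are this commenter's convention, so everything here is invented; the defaults kick in when run non-interactively):

```shell
#!/bin/sh
# Ask a couple of questions, write an env file, then hand off to docker-compose.
printf 'Public hostname [localhost]: '
read -r APP_HOST || true
APP_HOST=${APP_HOST:-localhost}

printf 'HTTP port [8080]: '
read -r APP_PORT || true
APP_PORT=${APP_PORT:-8080}

cat > setup.env <<EOF
APP_HOST=$APP_HOST
APP_PORT=$APP_PORT
EOF
echo "Wrote setup.env for $APP_HOST:$APP_PORT"

# The real script would now run something like:
#   docker-compose --env-file setup.env build
```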

But this is only the beginning. Thanks to `docker-compose`, that `build` step also installs:

The TL;DR is a typical web app is often a lot more than just running Python, and there's a huge amount of value in being able to run the same code across different operating systems without any installation changes.

The OP says app delivery, but they actually talk about development builds. Dev builds are another area where Docker/containerisation helps. Dev builds are just as annoying with Golang as with Ruby or Python. In fact, build awkwardness is orthogonal to static vs. dynamic typing.

Hm, I'm pretty sure I can take any Go project out there and build it in a matter of minutes on my current OS (depending on the build time, of course). It's just a matter of setting the environment on the right compiler binary and retrieving dependencies (which just got easier two weeks ago with 1.11's modules).

Interpreted languages have their uses, of course, but there's a clear difference in both ease of deployment and development between the realms of interpreted languages (Ruby/Python/PHP/etc.), VM/IL/JIT based languages (JVM/.NET/etc.) and 'plain old' compiler based languages (C/C++/Rust/Go/etc.).

> Just a matter of setting the environment on the right compiler binary and retrieving dependencies

Which is exactly how it works in Python.

I'm pretty sure, too, that I'll very quickly run into dependency hell by picking random Ruby/Python programs. Just take a look at that SO question:

That SO question is about a tool for "setting the environment on the right compiler binary and retrieving dependencies". But you can also do it manually just fine; that's how I usually did it, in fact. Just retrieve the dependencies to a directory and call Python with PYTHONPATH set to it.
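A self-contained sketch of that approach (the `mydep` module here is a stub standing in for a real package; with real dependencies you'd populate the directory with something like `pip install --target ./deps -r requirements.txt`):

```shell
# Vendor "dependencies" into a local directory...
mkdir -p deps
printf 'VERSION = "1.0"\n' > deps/mydep.py   # stub for a real package

# ...and point the interpreter at it; no virtualenv involved.
PYTHONPATH=./deps python3 -c 'import mydep; print(mydep.VERSION)'
```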

Pyenv is essentially the same as Go's new modules tool, it's just not called through the same binary as the interpreter/compiler.

> Interpreted languages have their uses, of course, but there's a clear difference in both ease of deployment and development between the realms of interpreted languages (Ruby/Python/PHP/etc.), VM/IL/JIT based languages (JVM/.NET/etc.) and 'plain old' compiler based languages (C/C++/Rust/Go/etc.).

Yes; with Python, you can both run directly from source without wasting time compiling after each change, and produce a standalone binary (dependent only on libc) for distribution, using a tool like PyInstaller. Plain old compiler-based languages are much more limited.

Yes, you can have everything in every language. After all it's turtles all the way down.

What makes us successful as software engineers, in the end (and IMHO), is the efficiency with which we reach our goals. To my eyes, when deliverability is the goal, a language where native (cross-)compilation is a first-class citizen is the most efficient path. But don't get me wrong, Python is really great and still a good choice for that scenario (I'm thinking of Dropbox, for example).

Mainly because I currently work on long-term projects, building enterprise-grade systems. Here you want simple, dumb technologies for which you'll still be able to stand up a build OS in the years to come.

I mean that I am knowledgeable about Python builds. You are knowledgeable about Golang. Neither skill is special. With Docker, I don't need to learn as much about FooLang's build process to contribute to a project using FooLang.

Thanks, that makes it clearer. As a matter of fact, I'm knowledgeable about building things in Python and Go, Node and PHP too. And others. But, admittedly, for 'big' things I tend to lean on Go nowadays (edit: mainly because I need to deliver on-premise, OS-aware applications).

It seems like you think there's a "build process" with Go because you're accustomed to python/pyenv (or whatever your env manager of choice is), in which case you need to know that there's no such thing with Go.

The following two courses of action are the same:

- "apt-get docker; docker pull {project-url}; docker start {project-url}"
- "apt-get golang; git pull {project-url}; go run *.go" (the latest Go, 1.11, will take care of dependencies at build time)

You can't as easily do "apt-get python; git pull {url}; python *.py" unless the program is really simple, because there are such questions as "Python 2 or 3?" or "pyenv, virtualenv, or Anaconda?". Obligatory XKCD here: https://xkcd.com/1987

But it's totally OK not to bother learning the compilation commands of a given project's language and to rely on Docker instead; really, I'm OK with that, and I do it myself from time to time. My initial point was about app delivery to end users.

You're right that the multiple solutions for Python make things confusing and prevent you from using the same method for every project. Hopefully we'll start to standardize on pipenv, which reduces the process to a `pipenv install` followed by `pipenv run`.

A bash script would have to automatically install all the dependencies. Over time, there is a growing chance that some of the required versions will conflict with whatever is already installed on the machine, and someone will have to go in and fix them... then fix them in a way that works on everybody's machines.

With docker, you can just ignore all that. As long as there is a single person capable of updating the dependencies on a single machine with docker, it'll work the same everywhere, always.

If everybody on your team uses the same distribution of Linux (say, the latest Fedora), then all dependencies can be packaged into RPMs or installed from the system repository. RPM and Docker are very similar in general idea: both are images of the result of installing a program (RPM) or a system (Docker).

Every member of the team will need to run the same version of Fedora, either directly, or in Vagrant, or in (ta-da) a Docker container.

In my experience, it turned out that for small Dockerfiles, RPM packages are an unnecessary burden, so usually I start with just Docker. But later, when the Dockerfile grows, it's much harder to track dependencies and installation instructions when they are interleaved, so it becomes easier to return to individually packaged programs and replace almost the whole content of the Dockerfile with a single "dnf install meta-package" command.

I was like that and ignored Docker for a number of years. The app I'm working on has one database, one queue, one API server, and one web front end. Some of these will scale in production, but for development it's quite simple. I used to just run them at the command line and get on with development.

But using Docker has been an eye-opener. I can now get my front-end developer a fully working database, queue, and API server with a simple `npm run docker`. They are free to develop the UI without having to worry about the underlying stack. I am free to change that stack without having to tell them what commands to run on their laptop to get it up to date.

As we add more services, such as email sending, image processing, and video compression, it won't require anything special to configure from the front-end perspective.

We're now moving it into the cloud and seeing the benefits of Kubernetes with our Docker containers: we can deploy, do blue/green deployments, and not worry so much about what the servers are doing, focusing instead on deployments. This is very clever stuff and will reduce our hosting costs without having to duplicate infrastructure.

So yeah, I didn't get Docker for a long time. It does depend on the types of apps you're building, though. My use case fits it perfectly.

Docker is surely not for everyone. If you don't get why you want to use Docker, chances are you don't need it.

Different people use Docker differently. I see several purposes Docker can be useful for:

1) Scalability
This is the main advertised selling point of Docker, I guess. It enables your stateless application to scale automatically.

2) Being able to deploy without caring about dependencies
There are times we want to deploy something onto customers' servers. We have no control over their machines' environments, but with Docker, the only thing I need to make sure of is that they have it installed.

3) Infrastructure as code
All our source code is checked in git and version controlled nicely. When we need to run it again, just clone from git. How nice! But our infrastructure, meaning server formation and service relationship? Not so much. Before using Docker, we basically need to ssh into servers and install dependencies according to the documents. It was not only slow, but unreliable because we all know how documents can lie. With Docker and docker-compose, the infrastructure can be written in code and checked in git. Most importantly, code doesn't lie.

You can surely use other tools to achieve those. I'm not saying Docker is the only way. But I believe it's a good tool to consider when you have those needs.

> Before using Docker, we basically need to ssh into servers and install dependencies according to the documents.

There's an entire cottage industry of not-docker tools for that. Not to take anything from docker but the options aren't 'docker' and 'the pre-docker dark ages where you ssh'ed into servers one by one and carefully typed commands from a faded printout'.

On 2, perhaps I've misunderstood how Docker works, but don't you need to deploy a Docker image that is binary-compatible with the type and version of the host OS? Then don't you just move the problem? (Two customers with different OSes require you to recreate the Docker images.) And what if these customers need to upgrade their version of Linux? Do they need to contact all their software vendors to reissue new Docker images?

I never understood how creating this tight dependency between the host and the Docker image is not a problem.

Well, you need a Docker image that's binary-compatible with the hardware, but frankly almost everyone uses x86-64 now, so this is not usually a problem. If these customers upgrade their version of Linux, the new version usually comes with Docker, and you don't need to update the image.

I'm a front end dev, and at my latest contract I wanted to help out our back end developers with some .NET stuff. Can you imagine trying to setup a full .NET stack on a Macbook a few years ago? This time what I had to do was:

You don't need docker to run a single program on a single computer... but the moment you either want to run multiple copies, or distribute it to multiple computers, particularly if they're not your computers, then being able to encapsulate stuff and reduce the number of dependencies is a huge benefit.

For example, it lets you run software on your computer without worrying about installing stuff that might affect other software, and without even having any interactions with other software aside from what you allow it.

Old-timer to 2 other old-timers: Docker does solve a problem, and this is quite close to it... We had a solution that we needed to install on servers for multiple clients. Before Docker, we had to make sure the libraries were correct (on CentOS, Debian, Ubuntu... you name it), which was a nightmare. We were literally debugging on customers' installations to see why it broke there.

Docker solves this nicely: apart from the kernel, you take all your dependencies with you.

In this case, it provides an easy installation of all the dependencies in a single command, while still allowing for customizations, and all in a standard way. Not to mention that certificates and similar are all solved.

Forget about hype and just try Docker. It's a tool like any other. It can be abused, sure, but it's useful too and it's here to stay.

EDIT: the difference between Java and Docker is that containers depend on kernel capabilities, not on some 3rd-party JAR maintainers. In my experience, Docker simply works unless you are doing something really weird.

I've been using Docker for lots of things. To be honest, the dependency part has been helpful but still weak: we inevitably find that our containers are built from things that go stale (apt-get sources, pip, broken URLs, stale SSL certs, etc.). I guess there is a skill to doing it well so that this doesn't happen, but there you are: it didn't really solve the problem; it's now an extra skill we need in order to solve a problem we had already mostly solved.

What has really sold me is more the orchestration side, with docker-compose allowing us to easily launch a whole fleet of independent services. I assume Swarm, Kubernetes, etc. make this even better. It's really this that lets us decompose our software into finer-grained independent components, which has a number of other benefits. I don't think we could manage it otherwise.

You still need to do a frequent build to detect broken dependencies, but at least setting up a daily docker build in CI is so much simpler and faster (and cross-platform!) than doing this with vagrant or live systems. At least you'll catch problems a lot sooner.

If you want to protect yourself from breaking dependencies, mirror them locally, and use a network-isolated docker build to verify you've actually got everything you need.

When you think about it that way, it's the pre-Docker world that seems unnecessarily dangerous: why would you just let some random process run on your system with all its dependencies and possibly cause havoc, without containerizing it?

I agree -- I'm personally super excited to try my hand at using LXD[0] on Ubuntu, in particular I want to see just how awesome/well-isolated system images are. If everything I'm reading is right, lxd/lxc system containers are going to make running VMs seem outdated.

Unfortunately it's a little bit annoying because the LXD docs don't make it easy to find what they define as a "system image". Do you have some links to more good layouts? I actually found a good talk from DebConf '17[1].

Containers keep all that junk away from my system.
How many apps need node.js or Ruby and stuff just to preprocess their CSS? Ever managed some legacy PHP app? Instead of putting that junk directly on my machine, I just put it in a container. Don't need it anymore? Cool, `docker rmi` that shit.

Yeah, well, maybe, but some Docker config files assume that you know how to configure the Postgres, Redis, node package manager, nginx, and SSL chain that live in the container by means of a YAML config file, which is in the end another custom layer of abstraction over the original config files. Of course, some Docker-based projects are better than others.

Yeah, I don't use it for scale. It's great just to have a declarative file that is easy to read and understand. I guess I use it the way people use Puppet and Ansible; I just find it better for my use case. And I don't always have to rely on what's in my OS's package manager, so I get a nice stable core OS and up-to-date apps.

Docker is basically a reproducible LXC/LXD, and you can get very close to actually running LXC-like machines with Docker.

For me, Docker actually solves the problem that LXC/OpenVZ were trying to solve, in a better way. We already had the data on separate volumes, and Docker is a much nicer interface than bootstrapping and then updating a chroot for LXC.