Tag Info

Let's forget the high-level architectural and philosophical arguments for a moment. While there may be some edge cases where multiple functions in a single container make sense, there are very practical reasons why you may want to consider following "one function per container" as a rule of thumb:
Scaling containers horizontally is much easier if the ...

Having slain a "two processes" container a few days ago, there were some pain points which caused me to use two containers instead of a Python script that started two processes:
Docker is good at recognizing crashed containers. It can't do that when the main process looks fine, but some other process died a gruesome death. Sure, you can monitor your ...
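One partial mitigation is a container healthcheck, so the orchestrator can at least probe the container from the outside. A minimal sketch (the image, endpoint, and start script are hypothetical, and it assumes curl exists in the image):

```dockerfile
# Hypothetical image running two processes behind a start script.
FROM python:3.12-slim
COPY app/ /app/
# Docker marks the container unhealthy if this probe fails,
# even while the main (PID 1) process still looks alive.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -fsS http://localhost:8080/health || exit 1
CMD ["/app/start.sh"]
```

This only tells you the container is unhealthy; unlike a crashed single-process container, Docker will not restart it on its own without extra tooling.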

The very first thing to know about a container is:
It is, first and foremost, a process.
Once that is understood, one can start to understand how containers compare and contrast with virtual machines. Containers and VMs both share isolation from their hosts. The method of isolation is the critical difference.
Container processes use extensions to the OS ...

I would keep the ECS container instances (I'm talking about the Docker hosts - I don't like AWS terminology here) and the deployment as two separate things.
Get your ECS stack up and running. You can manage it through CloudFormation and Auto Scaling groups, that's fine. Just think of your cluster as a platform you will deploy to, not something you ...

When people talk about running a database in Docker, they do not mean to store the data in a container; they are talking about having a docker image with the DB software, and mounting the data as a volume (a bind volume, not a container volume).
Volumes are an essential part of Docker, and are not something that is flaky or just tacked on. Docker is not ...
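As a sketch of that bind-mount approach (the host path and image tag are illustrative, not prescriptive):

```yaml
# docker-compose.yml (illustrative): the postgres container is
# disposable, but the data directory lives on the host.
services:
  db:
    image: postgres:16
    volumes:
      - /srv/pgdata:/var/lib/postgresql/data   # bind mount, not a container volume
```

If the container is removed and recreated, the data under /srv/pgdata survives.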

First of all, Rancher actually contains implementations of both Kubernetes and Mesos within itself. However, they did make their own system called Cattle which is heavily based on Docker's Swarm. I'll touch upon this in the section for Rancher.
Secondly, since each one offers similar base features (load balancing, resource isolation, etc) I'll try to go ...

There are a few answers to that:
Something needs to build those immutable images. It is certainly easier to use old-school-style procedural scripting to build something when starting from a known starting state but this can still get very unwieldy over time (e.g. Dockerfiles), especially when you end up wanting a big matrix of different images for things ...

The recommendation comes from the goals and design of operating-system-level virtualization.
Containers have been designed to isolate a process from others by giving it its own userspace and filesystem.
This is the logical evolution of chroot, which provided an isolated filesystem; the next step was isolating processes from each other to avoid memory ...

The same as for any Linux load: keep no more processes in a wait state than the number of CPUs.
The 1-, 5- and 15-minute load averages given by uptime should ideally be one less than the number of cores.
Containers are roughly isolated processes; leaving a core free for orchestration avoids congestion.
That doesn't mean only 7 containers on an 8-core machine, it's a matter of ...
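That rule of thumb is easy to express as a quick check; this sketch just encodes the "load should stay about one below the core count" heuristic from above:

```python
import os

def load_headroom(cores: int, load_1m: float) -> float:
    """Headroom under the 'load <= cores - 1' rule of thumb:
    positive means capacity to spare, negative means congestion."""
    return (cores - 1) - load_1m

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    load_1m, _, _ = os.getloadavg()
    print(f"{cores} cores, 1-min load {load_1m:.2f}, "
          f"headroom {load_headroom(cores, load_1m):.2f}")
```

It is only a heuristic; I/O-bound containers sleep most of the time and skew the picture.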

Update: Docker just released support for Kubernetes as a scheduler, which changes the situation and makes Kubernetes just an alternative scheduler to Docker Swarm.
TL;DR: DON'T DO IT. Engineers always try to create these dog-pig hybrids. Every unnecessary technology you bring in adds another whole set of faults. If you can pick one, then pick one and be happy ...

The word container refers to a lightweight virtualisation technology available in modern Linux kernels; the technology is very similar to FreeBSD jails.
An older Linux kernel, without container support, is still able to run processes concurrently. Some attributes of the system are private to a process, like the process environment or the process memory: only the process ...

The general ways to track down why a process failed on Linux apply here. One such way is to run the process under strace, which will log the system calls the process made and usually points to the reason for a failure.
You can create a Dockerfile that looks something like this:
FROM original_image
RUN apt-get -y update && apt-get install -y strace
# build with `...

While technologically, containers and virtual machines are very different, there is no apparent difference from the perspective of your software. It seems like the argument in your question is that data is special and will always be a unique snowflake, so your question basically boils down to what to do about it in terms of DevOps, CI and Automation.
This ...

Usually "containers" refers to something like Docker containers, which have popularized the name.
I quote here from Docker's definition:
Using containers, everything required to make a piece of software run
is packaged into isolated containers. Unlike VMs, containers do not
bundle a full operating system - only libraries and settings required
to make ...

The onboarding experience should be almost as simple as telling your new developer to just clone the repo and run docker-compose up. Personally I wouldn't bother worrying about IDE integration because people might prefer to use different IDEs.
Every project/application (if you have multiple) should be able to run separately and each project/application ...

Docker is a specific implementation of Linux containers, or, if you want to be more precise, Docker is a distribution of tools that includes runc, which is an implementation of Linux containers. Other implementations include rkt, LXC, LXD, and (I think) Snappy from Ubuntu.

As in most cases, it's not all-or-nothing. The guidance of "one process per container" stems from the idea that containers should serve a distinct purpose. For example, a container should not be both a web application and a Redis server.
There are cases where it makes sense to run multiple processes in a single container, as long as both processes support ...
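If you do run two processes in one container, the wrapper should at least propagate a child's death so Docker can see the failure. A minimal Python sketch (a real setup would more likely use a proper supervisor such as supervisord):

```python
import subprocess
import time

def run_until_first_exit(cmds):
    """Start all commands; as soon as any one exits, stop the rest
    and return the exit code of the one that finished first."""
    procs = [subprocess.Popen(c) for c in cmds]
    try:
        while True:
            for p in procs:
                rc = p.poll()
                if rc is not None:
                    return rc
            time.sleep(0.1)
    finally:
        for p in procs:
            if p.poll() is None:
                p.terminate()
```

Used as the container's entrypoint (e.g. `sys.exit(run_until_first_exit([...]))` with your two worker commands), the container dies visibly instead of limping along with one worker gone.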

If one uses a Dockerfile, then a colleague can also understand what happened (documentation as code). If one instead runs a container, enters it, makes changes and commits them, it would be hard to understand what packages were installed, especially after a couple of months.

Does the Docker engine abstract away the OS such that this configuration will run both apps?
No, it does not. Docker uses containerisation as a core technology, which relies on the concept of sharing a kernel between containers. If one Docker image relies on a Windows kernel and another relies on a Linux kernel, you cannot run those two images on the same ...

I can think of only one case where the overhead of running workers in separate containers is justified: if your setup uses Docker Swarm for clustered deployment. This way you will get all of the HA benefits.
Otherwise I don't see a reason to complicate such tasks, especially if they must use strictly the same codebase (which leads me to believe that they share ...

Who said that properties files and environment variables were mutually exclusive?
There is a distinction to be made between "where do I store my app's configuration?" and "where does my app source its configuration?"
The most likely outcome is that everyone probably should just keep doing what they are doing with configuration files as a storage mechanism ...
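One common way to have both: keep a properties file as the storage mechanism and let environment variables override individual keys at sourcing time. A sketch (the APP_ prefix and key names are made up):

```python
import os

def load_config(path, env=os.environ, prefix="APP_"):
    """Read simple key=value properties, then let environment
    variables of the form APP_<KEY> override individual entries."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    for key in config:
        override = env.get(prefix + key.upper())
        if override is not None:
            config[key] = override
    return config
```

The file stays the single documented source of defaults; the environment is only the per-deployment override channel.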

Azure Container Instances (ACI) may be a good option, as you suggest. They let you run a container directly on Azure, without having to manage a VM, with per-second billing for the time the container is used.
Although one of the demos on that blog mentions Kubernetes, the idea of ACI is that you can create a container through the Azure CLI with az ...

It is not officially supported (as of Minikube 0.25.0, Kubernetes 1.9, January 2018). But there is Beta support for Windows server containers in Kubernetes.
These articles contain more information:
http://blog.kubernetes.io/2018/01/kubernetes-v19-beta-windows-support.html
https://kubernetes.io/docs/getting-started-guides/windows

Alright... you're not going to implement every tool or automation at the same time, as some of them have quite deep impact on your development processes (and I daresay, development culture). Take a step-by-step approach; research each individual tool. Figure out what it actually does, what it is useful for. Play around with them, install them locally, go ...

We are planning to automate this process for our Jenkins EC2 slaves.
Currently we have to manually update the AMI ID every time we build a new AMI, but by taking advantage of the config.xml file where Jenkins stores all its configuration, we should be able to automatically update the AMI value in this file and then restart Jenkins to take those changes into ...
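A sketch of that automation step, assuming the AMI ID lives in an <ami> element of config.xml (the element name is an assumption; verify it against your own file, as plugin layouts differ):

```python
import xml.etree.ElementTree as ET

def update_ami(config_path, new_ami):
    """Rewrite every <ami> element in a Jenkins config.xml in place
    and return how many were changed. The element name matches the
    EC2 plugin's layout only as an assumption."""
    tree = ET.parse(config_path)
    changed = 0
    for node in tree.iter("ami"):
        node.text = new_ami
        changed += 1
    tree.write(config_path)
    return changed
```

Run it after the AMI bake, then restart (or reload the configuration of) Jenkins.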

https://forums.rancher.com/t/start-order-of-stack-containers/3106/9
We do not support depends_on, and neither does Docker in Swarm mode.
It is not a real solution to the problem anyway and leaves you with
unhandled pointy-edge cases when failures occur and containers are
being replaced. Your services should know how to either wait for their
...
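In practice, "your services should know how to wait" usually means retrying the dependency's TCP port at startup instead of relying on depends_on. A minimal sketch of such an entrypoint check:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP connection to host:port succeeds, or raise
    TimeoutError. Suitable as a service's startup check."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable")
            time.sleep(interval)
```

Unlike depends_on, this also covers the replacement case the quote mentions: if the dependency is restarted later, the same retry logic belongs in the service's reconnect path.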

Well, the extra bells and whistles are called process isolation: a container gets its own namespaces from the host kernel, which means the program in the container can't try to read kernel memory or eat more RAM than allowed.
It also isolates network stacks, so two processes can listen on port 8080, for example;
you'll have to handle the routing at the host level, ...
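A compose-style sketch of that host-level routing, where both containers bind 8080 inside their own namespaces and the host maps them to distinct ports (service names and images are hypothetical):

```yaml
# Illustrative: both containers listen on 8080 internally;
# the host publishes them on different ports.
services:
  api:
    image: example/api
    ports:
      - "8081:8080"
  admin:
    image: example/admin
    ports:
      - "8082:8080"
```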

Hi and welcome to DevOps SE!
While DevOps is not an end in itself, maybe it is worth stepping back for a moment to find out the actual problem before trying out solutions.
As you have asked a more or less generic question, I'll give you a sort of "DevOps primer" and stick to methodology rather than give you a list of tools.
How detailed you would tailor the ...

In reply to 'Would Amazon EC2 / Google / Azure be cheaper than having dedicated servers?'
I've done a lot of investigating into this and in every case they are NOT.
80 Linux servers running 24h/365, total cost per year:
Google: £37,515.46
AWS: £29,196.53
These costs do NOT include data transfer.
You can buy a dedicated cloud ...