**TL;DR**: Support for both ARM and Debian with ROS is now reflected in the Official DockerHub library! :whale:

Hello everyone!

As you might have noticed, DockerHub is beginning to support architectures beyond amd64 [1]. So I've expanded upon our dockerfile maintenance infrastructure for the official ROS images to enable arm support.

Additionally, while refactoring, support for multiple operating systems, i.e. Debian-based ROS images, has also been enabled, and extended across the supported arm architectures. To see the listing of supported suites, distros, and architectures for the official DockerHub library, you can view the manifest for ROS here [2]:

### Notes:
* New tags have been added to specify the operating system suite via appended suffix
* E.g. `kinetic-ros-base-xenial`, `kinetic-ros-base-jessie`
* There are no changes to the original set of tags, as they still point to the same suite
* E.g. `kinetic` <=> `kinetic-ros-base` <=> `kinetic-ros-base-xenial`
* The same holds for `amd64` tagged images hosted from the osrf/ros automated repo
* **Presently**, the multi-architecture ROS images are hosted under separate docker hub organizations
* E.g. `docker pull arm64v8/ros` OR `docker pull arm32v7/ros:indigo`
* You may reference `<arch>/ros:<tag>` to specifically pull a given architecture
* OR try out the ***temporary*** manifest enabled test rolling repo: `docker pull trollin/ros`
* **Forthcoming**, the official registry will internally negotiate what arch is pulled via the manifest
* E.g. if docker-engine host is `arm64v8`, `docker pull ros` should pull an `arm64v8` image
* There is some build scaffolding you can follow for multi-architecture image builds for ROS
* E.g. [arm32v7/job/ros](https://doi-janky.infosiftr.net/job/multiarch/job/arm32v7/job/ros/), [arm64v8/job/ros](https://doi-janky.infosiftr.net/job/multiarch/job/arm64v8/job/ros/)

Of course, if you'd like to play around with any of the arm images but don't have a Raspberry Pi or other arm-based platform lying around, you can easily emulate via qemu-user and binfmt-support. By mounting the necessary qemu-user static binaries into the container, and installing the necessary binfmt-support kernel module on the host, you can run commands within the arm environment on your `amd64` workstation.

E.g., a small script such as the one in `cross-docker` [4] can be used like so:
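A minimal sketch of that approach, assuming `qemu-user-static` is installed on the host at the path below and binfmt-support is set up; the image tag and helper name are illustrative:

```shell
# Sketch: run an arm image on an amd64 host by bind-mounting the qemu-user
# static binary into the container. Assumes the qemu-user-static and
# binfmt-support packages are installed on the host.
QEMU_BIN=/usr/bin/qemu-arm-static   # assumed install path

run_arm() {
  # Bind-mount the static emulator so arm binaries can execute on amd64.
  docker run --rm -v "$QEMU_BIN:$QEMU_BIN" arm32v7/ros:indigo "$@"
}

# Usage: run_arm uname -m   # reports the emulated architecture
```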

### P.S.
For `arm32v7`, there is a blocking issue upstream with the cloud image used on Docker Hub. If you would like to expedite `arm32v7` support for ROS docker images, you may make your concerns known and follow the bug report:
https://bugs.launchpad.net/cloud-images/+bug/1711735

Although some `i386` binaries are supplied by the ROS buildfarm, I've deliberately omitted it for now, given:
1. `i386` binaries for docker-engine are not officially shipped or supported by Docker
2. Current traffic for `i386` ROS packages is below that for arm

Very nice!
We've been successfully using ARM docker images for ROS in our CI setup for over a year now and it really makes things a lot easier.
We also included the static qemu binaries in our ARM images to make it easier to run them on our amd64 machines for testing. Is there any plan to also include qemu in the official images?

[quote="ruffsl, post:1, topic:2467"]
By mounting in the necessary qemu-user static binaries into the container, and installing the necessary binfmt-support kernel module to the host
[/quote]

[quote="flixr, post:2, topic:2467"]
We also included the static qemu binaries in our ARM images to make it easier to run them on our amd64 machines for testing. Is there any plan to also include qemu in the official images?
[/quote]

I would vote against this. Instruction translation is built into Docker CE on (recent versions of) Linux, Mac, and Windows 10 Pro. Rather than include them by default and push the additional bloat penalty onto users who might not need it, it's more important to me to keep the images small without having to remember to remove these binaries before deployment.

[quote="flixr, post:2, topic:2467"]
Is there any plan to also include qemu in the official images?
[/quote]

Hmm, a convenient idea at first, but just like the official Ubuntu or Debian images that serve as the base images for ROS, our ROS images similarly serve as a primary starting platform for many applications, be it continuous integration or target deployment, etc. I would concur with @computermouth that including qemu in the official ROS images may be too much of a niche use case to justify the larger base image size, which would require additional bandwidth and storage when shipping to resource-constrained embedded targets. Additionally, the current multi-arch generator setup is such that the dockerfiles themselves are designed to be architecture agnostic, keeping it simple to add support for future platforms as they emerge. Necessitating platform-specific alterations would complicate maintenance a bit.

I didn't yet reference this given some issues related to this capability on Ubuntu and Debian [1], and so instead opted for giving a legacy example of mounting the files that I know currently works. However, I suspect once some additional issues [2,3] (thanks for starting those, by the way, @computermouth) have been patched upstream, mounting or baking qemu files inside the image should no longer be necessary. Testing with my latest apt sources still fails, but please report back, @computermouth, when those patches make it into the Debian and Ubuntu releases.

However, it does work out of the box with the latest Fedora, so I assume it's something coming down the pipe in the qemu distribution packages. Any decisions in the design of your containers should keep that in mind.

Hmm. I like the current working solution you have written up in your repo more than suggesting folks mount qemu files (as done with cross-docker [1]). It also simplifies building dockerfiles from images of other architectures. I have updated the original post to reflect this interim solution in anticipation of the `--fix-binary` flag option being added to binfmt-support for Debian.

### Update:
arm32v7 images for kinetic and lunar have now just been released. Although the upstream issue with Ubuntu's cloud image for trusty remains [1], blocking older ROS distro tags that target 14.04 such as indigo, you can still at least use the latest LTS for ROS on your older arm targets, e.g. Raspberry Pi 2.

Additionally, the official docker library now natively supports manifest lists [2]! So instead of the previous `docker pull trollin/ros`, you should now simply be able to run `docker pull ros` to download the image architecture matching your docker host system. Should you need to pull a foreign architecture, you can still do so by specifying the repo path: `docker pull arm32v7/ros`.
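As a sketch, here is a small helper mapping a desired architecture to the repo path to pull; the repo names come from above, while the helper function itself is hypothetical:

```shell
# Hypothetical helper: with manifest lists, the bare repo name resolves to
# the host architecture automatically; foreign architectures still use an
# explicit per-arch repo path.
image_for_arch() {
  case "$1" in
    arm32v7) echo "arm32v7/ros:kinetic" ;;  # explicit armhf variant
    arm64v8) echo "arm64v8/ros:kinetic" ;;  # explicit aarch64 variant
    *)       echo "ros:kinetic" ;;          # native pull via manifest list
  esac
}

# e.g.: docker pull "$(image_for_arch arm32v7)"
```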

Additionally, as an updated reference for the Docker multi-arch article from IBM that I linked to in the original post, here is a more recent talk on the subject by Phil Estes (IBM Cloud) and Michael Friis (Docker, Inc.) from DockerCon EU 2017 [2]:

> In this talk, Phil and Michael will talk about how Docker was extended from x86 Linux to Windows, ARM and IBMs z Systems mainframe and Power platforms. They will cover the work and architecture that makes it possible to run Docker on different CPU architectures and operating systems; How porting Docker to a new OS is different from porting it to new hardware; What it means for a Docker image to be multi-arch (and how are multi-arch images built and maintained); How does Docker correctly deploy and schedule apps on heterogeneous swarms. Phil and Michael will also demo some of the new features that let Docker Enterprise Edition manage swarms with both x86 Linux and Windows nodes as well as mainframes.

I was wondering: what is the update policy for these docker images?
Also, is there any way to quickly show from which distro sync they were built?

Especially asking this since the last sync to indigo and kinetic accidentally included an ABI breakage [1] in the nodelet package, and if you build packages on your CI with the current docker images, they will segfault if you run them on your own up-to-date machine.

It would be nice if we could:
a) somehow include the distro sync (date?) in the image (maybe as a label or so?)
b) automatically build new images when there was a sync
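For (a), one sketch would be stamping the sync date onto the image as a label at build time; the label key, repo name, and date here are all hypothetical:

```shell
# Hypothetical: record a rosdistro sync date as an image label.
SYNC_DATE="2017-07-20"   # illustrative sync date

# At build time, something like:
#   docker build --label "rosdistro.sync=$SYNC_DATE" -t myorg/ros:kinetic .
# Read it back later with:
#   docker inspect --format '{{ index .Config.Labels "rosdistro.sync" }}' <image>

build_cmd() {
  echo "docker build --label rosdistro.sync=$SYNC_DATE -t myorg/ros:kinetic ."
}
```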

Right now the Docker images are updated manually at @ruffsl's discretion.

He's been working on automating the process; right now all the images can be updated using some metadata, but it is not hooked up to the buildfarm yet. We're planning on having a job on the farm triggered automatically after every sync to update these images; that job will also be run periodically to ensure there's no unexpected diff in the Dockerfiles and that they still build.

Providing the date of the sync may be a bit trickier, as the docker images get rebuilt every time the base images change, so it would not reflect the actual content of the image.

[quote="flixr, post:11, topic:2467"]
Ok, let me know if I can help with anything there
[/quote]

Regarding the update of the docker images, the last build was 24 days ago apparently, so they should still have the previous version for indigo (kinetic and lunar received the nodelet update earlier, so they already have the new version: https://github.com/ros/rosdistro/pull/16248, https://github.com/ros/rosdistro/pull/15631).
@ruffsl do we have a way to trigger rebuilds for the images in the official docker library? Or is our only option to submit an updated set of commit hashes?

[quote="flixr, post:9, topic:2467"]
Especially asking this since the last sync to indigo and kinetic accidentally included an ABI breakage
[/quote]

In regards to controlling how ROS syncs are released into the building of docker images, the safeguard thus far has been pinning the installed ROS packages by version number. However, from the looks of things, with the ABI break from July but the last version bump in the indigo dockerfile being from back in May [1], it seems that didn't stop it, due to implicit dependency changes in the sync?
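For reference, the pinning in question looks roughly like this in the dockerfiles; the package and version string here are illustrative, with the wildcard covering the Debian revision:

```shell
# Illustrative: pinning the ROS metapackage by version in an apt install,
# as done in the official dockerfiles. Note this does not pin transitive
# dependencies, which is how a sync can still slip an ABI change through.
ROS_PKG_PIN="ros-indigo-ros-base=1.1.4-0*"

install_cmd() {
  echo "apt-get update && apt-get install -y $ROS_PKG_PIN && rm -rf /var/lib/apt/lists/*"
}
```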

As @marguedas mentioned, the triggers for rebuilding the official images are either merged PRs to the official library manifest (pertinent to the image:tag desired), or a rebuild of a parent base image, like the official debian or ubuntu images.

[quote="marguedas, post:12, topic:2467"]
@ruffsl do we have a way to trigger rebuilds for the images in the official docker library ?
[/quote]

We can also ask the library maintainers directly to manually trigger a rebuild of the images, say by way of ticketing a request on the library GitHub repo. I've done so a number of times before.

@flixr, I think you may find it useful to know that past versions of Official Library images are archived in the docker registry and can be retrieved. As for the longevity of such archives, I don't yet know what it is (haven't checked), but they seem to go back quite a ways. This can be done by pulling the image by its digest (immutable identifier) [2]:

```
docker pull ros@sha256:<older_ros_sha256_here>
```

If you have an older image lying around, you can use the docker CLI to retrieve the digest (I haven't found a good way to retrieve the archived list of available digests for a given repo from the docker registry as of yet, but would like to know [3][4]). If your CI is sensitive to ROS syncs, pinning the digest of the image in your dockerfiles may be a way to mitigate unplanned disruptions. Just be aware that the FROM images would then be static and would not receive the latest security updates from upstream like generally tracked tags do.
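For example, retrieving a digest from a locally cached image and composing a pinned reference; this is a sketch, and the helper function is hypothetical:

```shell
# Retrieve the digest of a locally cached image (requires it to be pulled):
#   docker inspect --format '{{ index .RepoDigests 0 }}' ros:kinetic
# The result can then pin the FROM line of a downstream dockerfile:
#   FROM ros@sha256:<older_ros_sha256_here>

pinned_ref() {
  # Compose an immutable image reference from a repo name and a digest.
  echo "$1@sha256:$2"
}
```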

In the dockerfile for the official images, only the target application focused packages are pinned. There are sometimes other supporting packages installed, but are not necessarily pinned, e.g. `gnupg2`.

[quote="ruffsl"]
In the dockerfile for the official images, only the target application focused packages are pinned. There are sometimes other supporting packages installed, but are not necessarily pinned, e.g. `gnupg2`.
[/quote]

I was actually thinking of whether the metapackages themselves do any version pinning (as in: in their `Depends` fields). But that doesn't appear to be the case. The current setup appears to work because the repositories don't retain any old packages.