# docker-s3fs

## <a name="docker-engine-after-1.10" ></a> Docker engine after 1.10

Docker engine 1.10 added a new feature that allows containers to share the host mount namespace. This makes it possible to mount an s3fs container file system into the host file system through a shared mount, providing persistent network storage backed by S3.

### Prerequisites

Docker engine 1.10.x

If the docker service is managed by systemd, you need to remove MountFlags=slave from the unit. See issue. Example of how to fix this on CoreOS:
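A minimal sketch of such a fix is a systemd drop-in that clears the setting; the drop-in path and file name here are assumptions:

```ini
# /etc/systemd/system/docker.service.d/clear-mount-flags.conf
# Drop-in override: assigning an empty MountFlags resets the
# slave setting, so mounts created inside containers can
# propagate back to the host.
[Service]
MountFlags=
```

Apply it with `systemctl daemon-reload` followed by `systemctl restart docker`.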

It is important to use the -f flag to keep the s3fs container running in the foreground.
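As a sketch, the s3fs.service unit could look like the following; the image name my/s3fs-image, the bucket mybucket, and the use of --privileged are assumptions (not the original unit), and the container's entrypoint is assumed to be s3fs itself:

```ini
# /etc/systemd/system/s3fs.service (sketch; image, bucket, and paths are assumptions)
[Unit]
Description=s3fs container exposing an S3 bucket under /mnt/mydata
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run, then start fresh.
ExecStartPre=-/usr/bin/docker rm -f s3fs
# -v ...:shared makes the FUSE mount propagate to the host;
# -f keeps s3fs in the foreground so systemd supervises it directly.
ExecStart=/usr/bin/docker run --name s3fs --privileged \
    -v /mnt/mydata:/mnt/mydata:shared \
    my/s3fs-image mybucket /mnt/mydata -f
ExecStop=/usr/bin/docker stop s3fs

[Install]
WantedBy=multi-user.target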

Start the unit:

# systemctl start s3fs.service

Now you should be able to see the file system under /mnt/mydata on the host. Changes you make there will be reflected in the S3 bucket, and shared by other hosts running the same s3fs.service unit.

Note that if you previously created files in the S3 bucket with other tools such as s3cmd or awscli, the s3fs file system won't be able to determine file ownership and mode correctly. You will see directories listed with permissions like "d---------". To fix this, correct the permissions under /mnt/mydata on the host; s3fs will then re-upload its s3fs-specific x-amz-meta-* headers.
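For example, assuming the files should be owned by the core user with standard modes (the owner and modes here are assumptions; adjust them to your layout):

```sh
# Re-apply ownership and permissions so s3fs re-uploads its
# metadata headers for each affected object.
sudo chown -R core:core /mnt/mydata
sudo find /mnt/mydata -type d -exec chmod 755 {} +
sudo find /mnt/mydata -type f -exec chmod 644 {} +
```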

## <a name="docker-engine-before-1.10" ></a> Docker engine before 1.10

Before Docker version 1.10, s3fs-mounted volumes (FUSE-based file systems) in the container are not visible from the docker host through the -v <hostvol>:<s3fsvol> option, nor from other containers through --volumes-from <containername>. However, you can still copy data out and make it available on the docker host and in other containers.
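One way to copy data out is docker cp, which reads through the container's own mount namespace; the container name s3fs and the paths below are hypothetical:

```sh
# Copy a directory from inside the running s3fs container to the host.
docker cp s3fs:/mnt/mydata/reports /tmp/reports
```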

The following examples show how to start an s3fs container with an EC2 IAM role-based credential, or with an IAM user that has permission to access your AWS S3 bucket.
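A sketch of both variants; the image name my/s3fs-image and bucket mybucket are placeholders, and the s3fs options shown (the iam_role option and the AWSACCESSKEYID/AWSSECRETACCESSKEY environment variables) assume the container's entrypoint is s3fs itself:

```sh
# On an EC2 instance with an IAM role attached, s3fs can pick up
# credentials from the instance metadata via the iam_role option:
docker run --privileged my/s3fs-image \
    mybucket /mnt/mydata -o iam_role=my-role -f

# With an IAM user, pass the access keys as environment variables:
docker run --privileged \
    -e AWSACCESSKEYID=AKIA... \
    -e AWSSECRETACCESSKEY=... \
    my/s3fs-image mybucket /mnt/mydata -f
```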

Note: You should not include s3:// in the bucket name; otherwise, you will get a "Transport endpoint is not connected" error.
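For example, with a hypothetical image my/s3fs-image whose entrypoint is s3fs, only the bucket argument differs:

```sh
# Wrong: the s3:// prefix causes "Transport endpoint is not connected"
docker run my/s3fs-image s3://mybucket /mnt/mydata -f

# Right: pass only the bare bucket name
docker run my/s3fs-image mybucket /mnt/mydata -f
```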