Only votes cast on social media via this page will be counted towards the final judging for global winners.

We apologize for some earlier confusion regarding the social media votes – we are tracking each vote cast on that page to decide the global winner.

To ensure complete fairness in this competition, the voting committee reserves the right to reassess the quality of the winning submissions, regardless of the category for which they were submitted. Stars on GitHub will also count towards the final selection of winners. The voting committee will award bonus points for their favorite submissions in addition to the social media voting from the Docker community.

Below is the list of Docker Global Hack Day #3 Local Edition winners:

This could be done efficiently and in less time: by running multiple agents (each within its own container) across multiple clients, more data can be analyzed in the same window, while the number of containers launched per client stays under control.

Emerald CI pursues Docker Compose-driven development. You test your software on the CI server just as you test locally, which drastically reduces the risk of breaking the build due to differences between environments. It uses the same docker-compose.yml specification as Docker Compose, so all you need is a docker-compose.yml and a minimal config in your repo.
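A minimal setup could look like this (the service names and images here are illustrative, not Emerald CI's):

```yaml
# docker-compose.yml shared by local development and the CI run
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres:9.4
```

Because the CI server consumes the same file, a passing local `docker-compose up` is a strong signal that the CI build will pass too.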

In some cases, CI/CD-based Docker deployments can be slower than non-containerised deployments. One reason is the limitation of container links in a stack, which requires multiple containers to be restarted even if only one of them needs to change. This can cause a domino effect that can only be counteracted by side-by-side (aka blue-green) or incremental deployments. For some technologies, a volume can be used so that new binary versions are swapped in and the container dynamically rediscovers them. However, when shipping these containers to controlled environments we may not actually want a volume.

elope allows you to snapshot packages for incremental deployment to live containers. During deployment, elope also generates a new image which can be used later for a full deployment. It can also perform a diff to show how the live container has been patched over time. elope is Docker deployments without the ceremony.

The goal of this project is to define a set of image label metadata, plus launcher tooling that understands that metadata, to provide a smooth experience running containerized applications with tight integration with the host operating system.
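Such labels might look like the following (the label keys below are illustrative guesses, not the project's actual schema):

```dockerfile
FROM fedora:22
# Metadata a host-integration launcher could read before running the container
LABEL name="myapp" \
      run="docker run -d --net=host ${IMAGE}" \
      uninstall="docker rm -f ${NAME}"
```

A launcher could then inspect the image, substitute the placeholders, and execute the `run` command on the user's behalf.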

Checkpoint & Restore (CR) is still not generally available to container users. It requires some understanding of how it works, and users are likely to hit errors when attempting CR due to restrictions or differences between the source and target hosts. The purpose of this project is to create an external command-line tool, usable with either Docker or runC, that helps live-migrate containers between hosts by performing pre-migration validations and auto-discovering suitable target hosts.

Dockpot is a high-interaction SSH honeypot based on Docker. It solves the problems of low-interaction honeypots: the attacker believes they are in a real machine, and you can make the Docker container anything you want to fool the attacker.

This is DockerGVL’s submission for Global Hack Day 3. We decided that the normal progress bars that the user sees when pulling images, extracting them, etc. are far too boring. As such, we overhauled them to present a random “Docker Animal” for each progress bar that will swim across the sea that is your terminal session as images are pulled down. The goal was to represent all of the mascots of Docker (including Gordon the turtle), so the animals can be one of: [🐢, 🐙, 🐟, 🐳]. Example:
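The effect can be sketched in a few lines of Python (a toy re-creation of the idea, not the project's actual patch to the Docker CLI):

```python
import random

# Turtle, octopus, fish, whale -- the Docker mascots from the post
ANIMALS = ["\U0001F422", "\U0001F419", "\U0001F41F", "\U0001F433"]

def render_bar(progress, width=30, animal=None):
    """Render a progress bar whose cursor is a random Docker animal."""
    animal = animal or random.choice(ANIMALS)
    filled = int(progress * width)
    # The animal "swims" at the leading edge of the filled region
    return "[" + "~" * filled + animal + " " * (width - filled) + "]"

print(render_bar(0.5, width=10, animal="\U0001F433"))
```

Redrawing the bar in place as each layer downloads gives the swimming effect described above.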

This is a UI application to monitor Docker on any given host. We built it to monitor activity across various hosts and control them from a clean UI: you can run, stop, and kill containers, and check the status of all of them. This way, non-technical people can get comfortable with Docker. Next steps: let the user add hosts to control; add graphs of container status; add search against the Registry so the user can download an image and start from it; add Stop and Start buttons to the container view.

Starting from your GitHub repository, t3kpTun tries to generate a Dockerfile describing how to package your application's source code. We define an input form in which you can specify various parameters for the application, including the web server (Apache or Tomcat), dependencies, TCP ports, linked services, etc. t3kpTun then helps deploy this bundle of audited files and source code automatically into a Swarm cluster under your AWS account.
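A generator of this kind could be sketched as follows (a simplified guess at the idea; t3kpTun's real parameters and templates are not shown in this post):

```python
def generate_dockerfile(webserver, ports, dependencies=()):
    """Build a minimal Dockerfile from form parameters (illustrative only)."""
    base = {"apache": "httpd:2.4", "tomcat": "tomcat:8"}[webserver]
    lines = ["FROM " + base]
    for dep in dependencies:
        lines.append("RUN apt-get update && apt-get install -y " + dep)
    lines.append("COPY . /usr/src/app")
    for port in ports:
        lines.append("EXPOSE %d" % port)
    return "\n".join(lines)

print(generate_dockerfile("tomcat", ports=[8080]))
```

The generated Dockerfile can then be built and scheduled onto the Swarm cluster like any hand-written one.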

This is a mini monitoring tool for custom metrics from a container; it should be started inside the container when it is spawned. It collects custom metrics, raises alarms via Pushbullet, and sends data to InfluxDB for graphing. You can write your own module to collect a custom metric from the container, and the tool will monitor it against a pre-defined threshold and push the data for graphing.
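The threshold check at the core of such a tool might look like this (the metric names and limits are made up, and the Pushbullet and InfluxDB calls are left out):

```python
def check_metric(name, value, thresholds):
    """Compare a custom metric against its pre-defined threshold.

    Returns an alarm message (to be pushed via Pushbullet) or None.
    The data point itself would also be forwarded to InfluxDB for graphing.
    """
    limit = thresholds.get(name)
    if limit is not None and value > limit:
        return "ALARM: %s=%s exceeds threshold %s" % (name, value, limit)
    return None

thresholds = {"open_files": 1024, "queue_depth": 100}
print(check_metric("queue_depth", 250, thresholds))
```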

Our goal is simple: we want to use Docker's ideas to make the desktop experience better. Search for trusted applications in AppIt repositories. Tight Dropbox integration retains app customizations, even across different hosts. Usability: click on app icons as you are used to, but now they run on Docker.

Swarm and Machine are individually very powerful projects, but when combined, they can work wonders. This project attempts to create a self-scaling cluster using Swarm and Machine. Currently, the cluster must be set up and configured manually: whenever the Swarm's capacity is full, a new Docker host has to be attached to the cluster. This is something that can be intelligently automated: whenever Swarm runs out of resources, it invokes Machine to provision new Docker hosts. To set up an auto-scaling Docker cluster on AWS, all you need is Swarm, Machine, and your Amazon API keys.
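The control loop can be sketched like this (the capacity check and the Machine invocation are stubbed out; the real project would query Swarm and shell out to something like `docker-machine create --driver amazonec2`):

```python
def scale_if_needed(used_slots, total_slots, provision):
    """Invoke Machine to add a host when the Swarm cluster is full.

    `provision` stands in for the docker-machine call (stubbed here).
    Returns the new total capacity.
    """
    if used_slots >= total_slots:
        provision()  # attach one more Docker host to the cluster
        return total_slots + 1
    return total_slots

provisioned = []
capacity = scale_if_needed(used_slots=4, total_slots=4,
                           provision=lambda: provisioned.append("node-5"))
print(capacity, provisioned)
```

Run periodically, this turns "cluster is full, add a host by hand" into an automatic scaling event.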

Until now, optimization has been applied at the image level, but here we plan to build a tool that works as a daemon process while your image is being built. Among other things, it removes redundancy, dead code, and other irrelevant content, so your memory usage is more efficient.

With docker-record we want to address the process of getting your infrastructure up and running within your container and then having to painfully reproduce what you did in order to create a Dockerfile. We provide a semi-automated way to go from a Docker container to a reproducible, transparent definition of that infrastructure in a Dockerfile.
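In spirit, the recording step boils down to replaying a captured shell history as Dockerfile instructions (a simplification of the idea; the function below is ours, not docker-record's code):

```python
def history_to_dockerfile(base_image, commands):
    """Turn a recorded shell history into a reproducible Dockerfile."""
    lines = ["FROM " + base_image]
    for cmd in commands:
        cmd = cmd.strip()
        if cmd and not cmd.startswith("#"):  # skip blanks and comments
            lines.append("RUN " + cmd)
    return "\n".join(lines)

recorded = ["apt-get update", "# just checking", "apt-get install -y nginx"]
print(history_to_dockerfile("ubuntu:14.04", recorded))
```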

Whenever something is wrong with a container and you need to look inside it, you find it lacks useful tools: ping, ip, traceroute, strace, ltrace, ldd, almost anything. Now you can get them easily and in a reproducible, script-friendly way.

This is a new, fun way to learn to use Docker without needing a server in the cloud or installing Docker Toolbox on your computer. All you need is a modern web browser! Our highly advanced Professional Online Wow Learning software (Professor Owl) will guide you through setting up your first container, then managing and troubleshooting it. You’ll also learn how to:

Creating small containers requires a lot of voodoo magic and it can be pretty painful. You shouldn’t have to throw away your tools and your workflow to have skinny containers. Using Docker should be easy. docker-slim is a magic diet pill for your containers 🙂 It will use static and dynamic analysis to create a skinny container for your app. Right now it only does simple dynamic analysis 🙂

This is a Docker registry authentication service which supports the token authentication introduced in Registry V2. For storing user data it uses a Vault backend, but future plans are to extend it to more backends. The benefit of the new token authentication introduced in Registry V2 is fine-grained access control over your private registry. That’s especially important in bigger teams where you share your registry across different projects.
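The token flow starts with the registry answering 401 with a `WWW-Authenticate` header that tells the client where to fetch a token and for what scope; parsing that challenge is straightforward (a minimal sketch based on the Registry V2 token spec, not this project's code, and the URLs below are made up):

```python
import re

def parse_www_authenticate(header):
    """Extract realm/service/scope from a Registry V2 Bearer challenge."""
    assert header.startswith("Bearer "), "not a Bearer challenge"
    return dict(re.findall(r'(\w+)="([^"]*)"', header))

challenge = ('Bearer realm="https://auth.example.com/token",'
             'service="registry.example.com",'
             'scope="repository:myteam/app:pull,push"')
print(parse_www_authenticate(challenge))
```

The client then requests a token from the `realm` URL for that `service` and `scope`, and retries the registry request with `Authorization: Bearer <token>`; the auth service is where the fine-grained access decisions live.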

We already have great tools to help us deal with library dependencies, such as Bundler, NPM, and Cargo. However, in the world of microservices, we still have to manage our own microservice infrastructure, and creating a scalable one is hard and requires a lot of experience. With Spacer, you simply write down the service you need and the version you want; Spacer takes care of building, monitoring, and scaling for you. For example, if we need a production-ready open-source spam-filter service, we can write “poga/spam-fighter” in a file named Spacerfile. Spacer pulls the correct service from GitHub and deploys it to a development environment or a production-ready IaaS. Now we can scale this spam-filter service together and everyone can benefit from it. Thanks to Docker.
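Based on that description, a Spacerfile might look like this (the exact syntax, including how versions are pinned, is our guess):

```
# Spacerfile -- one required service per line (format assumed)
poga/spam-fighter 1.2.0
```

Much like a Gemfile or package.json, the file becomes the single declaration of your microservice dependencies.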

Chorus is a simple web service that allows you to easily set up a build farm for building and running Dockerfiles. A simple CLI lets you manage a large fleet of machines and a large repository of Dockerfiles with ease. Distribute, build, run, and stop your containers as you see fit, and embed docker run command-line options right into your Dockerfiles.

Prizes for Global Edition Winners

For the global competition, we will award prizes to the top three teams in each category.

First Place:

Each member of the three Global Docker Hack Day winning teams (one for each category) will receive a complimentary pass to attend DockerCon Europe, accommodations during the conference and a lightning talk at the conference to present their winning hack!

Second Place:

Each member of the three second place Global Docker Hack Day teams (one for each category) will receive a limited edition Docker hoodie.