Deploying Functions from a Dockerfile


This tutorial guides you through the process of deploying functions whose build process is solely defined in a user-supplied Dockerfile. The tutorial assumes that you followed the source-based deployment tutorial, which provides an introduction to function signatures, configuration, and more.

How is this different from source-based deploys?

A Nuclio function is, at the end of the build process, a container image. This image includes all the components required to run the function, except for the function configuration:

Processor binary

Per-runtime shim layer (for example, a Python application that communicates with the processor on one side and the user’s code on the other)

User code

To generate such an image using Docker, you must provide a Dockerfile. When you provide Nuclio's build process with source code (whether from a local directory, a local file, or a URL pointing to an archive), Nuclio generates this Dockerfile for you. Prior to version 0.5.0 of Nuclio, this generation process required formatting and templating, and therefore happened in code. Version 0.5.0 introduced a Dockerfile-based build process, reworking the previous build recipe into one that can be expressed entirely in a Dockerfile. This opened up a new build method in which users either provide their own Dockerfile or, taking it a step further, build the function image themselves using only docker build.
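For example, the fully manual flow might look like the following sketch. The image tag is illustrative, and the exact deploy flags may differ between nuctl versions, so check nuctl deploy --help for your installation:

```sh
# Build the function image yourself, using nothing but docker build
docker build --tag my-function:latest .

# Deploy the prebuilt image instead of letting Nuclio build one
nuctl deploy my-function --run-image my-function:latest
```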

Dockerfile deployment isn't better than source-based deployment; it's simply another way for users to create function images. Even before version 0.5.0, Nuclio had features that allowed users to inject build-time parameters, such as spec.build.commands for running apk, apt-get, pip, and other package managers. However, some users may prefer to handle the build themselves, using the tools they know and love.
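For reference, in the source-based flow such build-time commands live in the function configuration. A minimal sketch (the package names here are illustrative, not required by Nuclio):

```yaml
spec:
  build:
    commands:
    - apk add --no-cache gcc      # system packages via the base image's package manager
    - pip install requests        # runtime dependencies for the function code
```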

Note

While the process itself is offered as an alternative, many good things came from this feature. Most notably, prior to version 0.5.0, users were limited to using pre-baked "alpine" or "jessie" base images. Now, both source-based and Dockerfile-based builds can use any base image, as long as that image contains an environment suitable for the runtime (for example, to run Python functions, the image must contain a Python interpreter).

Now, create a Dockerfile by following the guidelines in the Go reference.

Note

Future versions of nuctl will automate creating these blueprints through something like nuctl create blueprint --runtime python:3.6, which will create a Dockerfile, a function.yaml file, and an empty Python handler.

FROM nuclio/uhttpc:0.0.1-amd64: provides an open-source health-check binary (essentially a self-contained curl). This is used by the "local" platform. You don't need this stage or the HEALTHCHECK directive if you plan to run only on Kubernetes.

FROM alpine:3.6: the base image on which the final processor image will run.

FROM nuclio/handler-builder-golang-onbuild: this is where it gets interesting. While every runtime needs the processor binary, each runtime must also provide its own unique set of artifacts. Interpreter-based runtimes, like Python and NodeJS, simply need to provide the shim layer and the user's code. Compiled runtimes (Go, Java, .NET Core), however, must compile the user's code into a binary. This is done with a set of ONBUILD directives in the onbuild image: you provide the source, and the base image does everything required to place the artifact at the expected location. In this tutorial, by simply using FROM nuclio/handler-builder-golang-onbuild and providing Go source code, you build a Go plugin that resides at /opt/nuclio/handler.so. All you have to do is copy the plugin to the proper location in your final processor image.
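Putting the three stages together, the multi-stage Dockerfile looks roughly like the sketch below. The COPY source paths inside the builder and uhttpc images, and the health-check port, are assumptions about the image layout; consult the Go reference for the exact values:

```dockerfile
# Stage 1: compiles the user's Go code into handler.so via ONBUILD directives
FROM nuclio/handler-builder-golang-onbuild as builder

# Stage 2: supplies the uhttpc health-check binary (local platform only)
FROM nuclio/uhttpc:0.0.1-amd64 as uhttpc

# Stage 3: the final processor image
FROM alpine:3.6

# Copy the processor binary and the compiled handler plugin (paths are assumptions)
COPY --from=builder /home/nuclio/bin/processor /usr/local/bin/processor
COPY --from=builder /opt/nuclio/handler.so /opt/nuclio/handler.so

# Health check for the "local" platform; omit both lines on Kubernetes
COPY --from=uhttpc /home/nuclio/bin/uhttpc /usr/local/bin/uhttpc
HEALTHCHECK --interval=1s --timeout=3s CMD /usr/local/bin/uhttpc --url http://localhost:8082/ready || exit 1

CMD [ "processor" ]
```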

It is up to you to customize this Dockerfile if you so choose (for example, by adding RUN directives that install dependencies), but all provided Dockerfiles are ready to go. Go ahead and build the function; you only need the Dockerfile and helloworld.go:
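A minimal sketch of the build, assuming you tag the image helloworld:latest (any tag your platform can pull works):

```sh
# Run from the directory containing the Dockerfile and helloworld.go
docker build --tag helloworld:latest .
```

Once built, the image can be deployed like any prebuilt function image, since it already contains the processor, the shim layer, and your compiled handler.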