Artifactory and OpenFaaS – Containers Everywhere!

By Leon Stigter

February 19, 2019


Right now, we live in a world where the Kubernetes website gets more visitors than the Seattle Seahawks website. That probably means that containers, and the orchestration platforms they run on, are among the most important technologies around. I mean, everything runs as a container, doesn’t it?

The other big-ticket item for developers is serverless. Being able to scale to infinity and back again, not paying for resources you’re not using, and not worrying about high availability and fault tolerance is quite liberating for developers. The “only” thing developers have to worry about is writing their code.

To combine these two ideas sounds like a no-brainer. Have a world-class container orchestration platform take care of the infrastructure and have developers just write their code. Unsurprisingly, you still want to keep the dependencies of the apps you build and the apps themselves somewhere safe. So, to cover all of that, let’s combine OpenFaaS with JFrog Artifactory as the artifact repository.

Setting up OpenFaaS

OpenFaaS started back in 2016, when Alex Ellis set out to build an abstraction on top of existing orchestration platforms so we wouldn’t be locked into a single vendor or technology. The idea was to focus on optimizing the developer experience and make it easy for developers to build serverless apps on container technology.

The easiest way to deploy OpenFaaS into your Kubernetes cluster is with the official Helm chart, which sets up the project with sensible defaults. One setting you’ll likely want to change is faasIdler.dryRun: setting it to false instructs OpenFaaS to actually scale functions down to zero pods when they haven’t been invoked for a specified period of time (5 minutes by default).
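As a sketch, assuming the official chart from the faas-netes repository and the standard openfaas/openfaas-fn namespaces, the install could look like this:

```shell
# Add the OpenFaaS chart repository (the official faas-netes charts)
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update

# Create the namespaces the chart expects
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

# Install OpenFaaS, turning off dry-run mode so idle functions
# really are scaled down to zero pods
helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set functionNamespace=openfaas-fn \
  --set faasIdler.dryRun=false
```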

Build an app

Personally, I love writing Go, so the app will be a pretty straightforward “Hello World” in Go. To get started, you’ll need to download the function templates and make one slight change in the Dockerfile that builds the app, so that line 15 of ./template/go/Dockerfile reads like this:
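The exact line isn’t preserved in this copy of the post; as an assumption, enabling Go modules in the template’s build step (instead of building from a vendor directory) might look something like:

```dockerfile
# Hypothetical reconstruction -- the original line is not preserved here.
# Build with Go modules enabled so dependencies are resolved through a
# module proxy rather than a copied vendor directory:
RUN CGO_ENABLED=0 GOOS=linux GO111MODULE=on go build -o handler .
```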

Using vendoring (like the default template does), or getting the Go modules directly from GitHub, is not really a good idea if you want to have immutable and repeatable builds. A better approach is to get them from Artifactory or GoCenter. Depending on where you want to get them from, you can update the Dockerfile to either of the two below:
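The two variants aren’t preserved in this copy of the post; as a sketch, pointing the Go client at a module proxy from the Dockerfile could look like either of these (the Artifactory repository path api/go/go is an assumption):

```dockerfile
# Option 1: resolve Go modules through an Artifactory Go repository
# (the repository path "api/go/go" is an assumption)
ENV GOPROXY=http://myhost:8081/artifactory/api/go/go

# Option 2: resolve Go modules through GoCenter
ENV GOPROXY=https://gocenter.io
```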

Either snippet makes sure that, during the build process of your app, the Go client resolves modules through the configured proxy.

Now you’re ready to create a new project. The first step is to create the scaffolding with a single command: faas-cli new --lang go hello-openfaas-go. That command creates a new directory called hello-openfaas-go containing a handler.go file, plus a hello-openfaas-go.yml deployment descriptor. A simple handler that responds with a friendly “hello” could look something like this:

By default, OpenFaaS sends all output from the container back to the caller, but that isn’t always what you want. Lines 9 and 10 in hello-openfaas-go.yml are updated to change that.
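Those two lines aren’t preserved in this copy of the post; with the classic watchdog, the change might look like the following fragment of hello-openfaas-go.yml (the exact keys, write_debug and combine_output, are assumptions about what was changed):

```yaml
functions:
  hello-openfaas-go:
    environment:
      # Only return the handler's response, not all container output
      write_debug: false
      combine_output: false
```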

The actual “magic” connecting OpenFaaS, JFrog Artifactory, and Kubernetes happens on line 8. That’s where you specify where the resulting Docker image should be stored, and it’s also where Kubernetes will pull the image from when it starts the deployment. In this case, the image is stored at

myhost:8081/docker/hello-openfaas-go:1

The URL has a few components, so let’s break it down a bit:

myhost:8081 is the URL of the Artifactory server (if you want to get started with Artifactory as a Docker registry, check out the getting started guide);

docker is the name of the Docker repository in Artifactory;

hello-openfaas-go is the name of the image; and

1 is the tag of the image.

Without changing your workflow you can use Artifactory as the repository for your OpenFaaS deployments!
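As a sketch of that workflow, once you’ve authenticated against the Artifactory Docker registry, building, pushing, and deploying the function is a single command:

```shell
# Log in to the Artifactory Docker registry
docker login myhost:8081

# Build the image, push it to Artifactory, and deploy it to OpenFaaS
faas-cli up -f hello-openfaas-go.yml
```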

What’s next

If you want to test drive all the features of JFrog Artifactory (and a lot more), you can sign up on our demo environment. For questions or comments, feel free to leave a message here or on Twitter!