Utilizing Caches When Building Go Projects On Google Cloud Build

Moore’s law has ensured that we experience an exponential rise in computing power, so there is little pressure to optimize our software to use fewer CPU cycles, less memory, and ultimately less electricity. However, the explosive growth computing has gone through, and the challenges we face as humanity, have put us in a tough spot. We need to consume less, use our resources more wisely, and consider the impact of our actions on future generations. With this in mind, we can take small, simple steps to ensure our actions have less of a negative impact.

Consider using Go for your next (cloud-oriented) project. Why? Go has a great mixture of accessibility, minimalism, and practicality. With Go and Docker it’s quick and easy to build containerized software. Compilation times are low, and with the new module functionality, dependencies are much easier and more straightforward to manage.

Let’s have a look at how we can manage our dependencies and re-use as much as possible when implementing CI with Go, Google Cloud Build, and Docker, thus saving precious Google Cloud build time. We will create a builder image as a base layer containing our project dependencies, with no additional tooling or extra Dockerfiles, using a standard multi-stage Dockerfile definition. We will use docker-compose to bootstrap our dependencies and run the tests:


```yaml
version: "3"

services:
  test:
    image: "gcr.io/${PROJECT_ID}/${REPO_NAME}:builder"
    build:
      context: .
      dockerfile: Dockerfile.builder
    entrypoint: >
      go test ./...
```

We define our test service, whose image tag takes the form gcr.io/${PROJECT_ID}/${REPO_NAME}:builder. The $PROJECT_ID and $REPO_NAME variables will be supplied by the Cloud Build tool.

If we run tests locally, we can create a .env file in the project root directory to supply those variables when running docker-compose. We found that using docker-compose to bootstrap dependencies and run tests is the best solution for Cloud Build: it is supported nicely and improves reproducibility. Now let’s have a look at the Dockerfile:
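A multi-stage Dockerfile along these lines might look roughly like the sketch below (the Go and base image versions, stage names, and output paths are assumptions; adapt them to your project). The key idea is that the builder stage copies only go.mod and go.sum before downloading modules, so the dependency layer is rebuilt only when the dependencies themselves change:

```dockerfile
# ---- builder stage: caches the project dependencies ----
# This is the stage tagged and pushed as gcr.io/$PROJECT_ID/$REPO_NAME:builder.
FROM golang:1.21 AS builder
WORKDIR /app
# Copy only the module files first, so the download layer is
# invalidated only when dependencies change.
COPY go.mod go.sum ./
RUN go mod download
# Copy the sources on top of the cached dependency layers.
COPY . .

# ---- build stage: compile the binary ----
FROM builder AS build
RUN CGO_ENABLED=0 go build -o /bin/app .

# ---- final stage: small runtime image ----
FROM gcr.io/distroless/static
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```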

Notice the --target flag in the first step. Docker allows us to build only a specific build stage, ignoring everything after it. This is exactly what we need in order to use the cache image in subsequent builds.

In the first step, if our dependencies change, the image will be rebuilt; if the cache image doesn’t exist, it will be built from scratch. The second and last steps will run the tests and, on success, build and deploy the final image.
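Put together, the Cloud Build configuration could follow the sketch below (step ordering, builder names, and the Dockerfile.builder filename are assumptions based on the compose file above): pull the previous builder image to seed the cache, rebuild only the builder stage with --target and --cache-from, run the tests via docker-compose, and then build and push the final image.

```yaml
steps:
  # Pull the previous builder image to seed the layer cache;
  # "|| exit 0" lets the very first build proceed with no cache.
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker pull gcr.io/$PROJECT_ID/$REPO_NAME:builder || exit 0']

  # Rebuild only the builder stage, reusing cached dependency layers.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build',
           '--target', 'builder',
           '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
           '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
           '-f', 'Dockerfile.builder', '.']

  # Run the tests through docker-compose.
  - name: 'docker/compose:1.29.2'
    args: ['run', 'test']
    env: ['PROJECT_ID=$PROJECT_ID', 'REPO_NAME=$REPO_NAME']

  # Build the final image, reusing the builder stage from the cache.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build',
           '--cache-from', 'gcr.io/$PROJECT_ID/$REPO_NAME:builder',
           '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA',
           '-f', 'Dockerfile.builder', '.']

# Push both the builder cache image and the final image.
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME:builder'
  - 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA'
```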

Re-using the dependency layer in Cloud Build steps sped up our build times by a factor of 1.5, not the greatest achievement, I must admit. However, the more your project grows, the more minutes end up being shaved off your build times. Additionally, this approach establishes an efficiency mindset. Let’s not waste our build resources, and let’s consciously apply a scarcity mindset to the CI/CD process. Change happens in small increments, bit by bit.