What is GCE?

GCE allows you to run virtual machines on infrastructure that scales to thousands of virtual CPUs and is designed to be fast and reliable for many types of workloads. GCE offers a number of capabilities, including block storage with varying levels of performance, as well as networking that lets you scale while keeping your applications connected. You can read more in the GCE documentation.

Getting Started

Flocker does a great job of orchestrating data volumes around a cluster of machines and automatically moving those volumes between nodes when your containers move.

GCE provides Flocker with the machines and persistent disks that it can manage automatically for you and your containers running on the machines.

Combining GCE infrastructure with a volume manager like Flocker gives you the ultimate flexibility for your persistent containerized workloads in a microservices environment.

The Flocker driver for GCE has the following features:

Support for account authentication (both via the VM's service account and from explicit authentication credentials).

Verified testing on large clusters.

Support for Flocker profiles (bronze, silver, gold): silver and gold are persistent disks backed by SSD, while bronze is a persistent disk backed by spinning disk.

You can read more about our integration with GCE and how to use it by visiting the GCE configuration documentation section of our docs. Feel free to reach out on IRC or send an email to support@clusterhq.com.

Deploying Flocker On GCE

Here is an example of deploying Flocker 1.11 onto GCE with Ansible, using the new GCE driver together with Docker. Feel free to watch the following recording (no audio) if you want to see what it is like to get started using Flocker on GCE, or try it yourself in the step-by-step walkthrough below!

Walkthrough

The first thing you will need to do is create a GCE account. At the time of writing, you can receive a $300 credit, valid for 60 days, on GCE.

Second, create a workspace on your local machine.

$ mkdir ~/gce-demo

Pull down the demo repository from GitHub.

$ git clone https://github.com/ClusterHQ/gce-ansible-demo

Install the gcloud command line tool. See the Google Cloud SDK documentation for more on downloading and installing it.

You will need Python 2.7 and virtualenv installed (pip install virtualenv && pip install virtualenvwrapper), as well as flockerctl to interact with your cluster.

If you have a local Docker daemon, you can install flockerctl with the following command. You can also install it directly on Mac OS X if you would like.

$ curl -sSL https://get.flocker.io/ | sh
$ flockerctl --help
Usage: flockerctl [options]
Options:
  --cluster-yml=      Location of cluster.yml file (makes other options
                      unnecessary) [default: ./cluster.yml]
  --certs-path=       Path to certificates folder [default: .]
  --user=             Name of user for which .key and .crt files exist
                      [default: user]
  --cluster-crt=      Name of cluster cert file [default: cluster.crt]
  --control-service=  Hostname or IP of control service
  --control-port=     Port for control service REST API [default: 4523]
  --version           Display Twisted version and exit.
  --help              Display this help and exit.
Commands:
  create      create a flocker dataset
  destroy     mark a dataset to be deleted
  list        list flocker datasets
  list-nodes  show list of nodes in the cluster
  ls          list flocker datasets
  move        move a dataset from one node to another
  status      show list of nodes in the cluster
  version     show version information

Next, enter the directory of the repository pulled from GitHub and create a virtual environment.

$ gcloud auth login
# The name of the GCP project in which to bring up the instances.
$ export PROJECT=<gcp-project-for-instances>
# The name of the zone in which to bring up the instances.
$ export ZONE=<gcp-zone-for-instances>
# A tag to add to the names of each of the instances.
# Must be all lowercase letters or dashes.
# This is used so you can identify the instances used in this tutorial.
$ export TAG=<my-gce-test-string>
# The number of nodes to put in the cluster you are bringing up.
$ export CLUSTER_SIZE=<number-of-nodes>
# Double check all environment variables are set correctly.
$ for instance in $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE); do echo "Will create: $PROJECT/$ZONE/$instance"; done
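The tutorial's next step brings up the instances, but the exact command is not reproduced in this post. Here is a hedged sketch of what it might look like; the machine type and image family are illustrative assumptions, not values from the original. A small helper prints one create command per node so you can review them before piping the script's output to sh:

```shell
# Hypothetical sketch -- machine type and image family are illustrative
# assumptions. create_cmd prints one "gcloud compute instances create"
# invocation per node so you can review them before running anything.
create_cmd() {
  printf 'gcloud compute instances create %s --project %s --zone %s --machine-type n1-standard-1 --image-family ubuntu-1404-lts --image-project ubuntu-os-cloud --scopes https://www.googleapis.com/auth/compute\n' \
    "$1" "$PROJECT" "$ZONE"
}
for instance in $(seq -f instance-${TAG}-%g 1 $CLUSTER_SIZE); do
  create_cmd "$instance"
done
# When the commands look right, pipe this script's output to sh to create the VMs.
```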

Note: in the gcloud compute instances create command, the --scopes https://www.googleapis.com/auth/compute flag is what gives our VMs permission to create and delete volumes, so we can skip adding specific credentials to the agent.yml.

$ gcloud compute config-ssh --project $PROJECT
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):

There are many ways to install Flocker on a cluster of nodes; for the sake of this tutorial we are using ClusterHQ’s Ansible Galaxy role. If you already use Ansible Galaxy, this provides a nice way to install Flocker in your existing system. If you are not interested in using the Ansible role to install Flocker, you can read our installation docs on how to install Flocker and skip down to the steps after Ansible. The Ansible Galaxy role simply automates some of the steps.

Install the requirements to set up a Flocker cluster using Ansible. This involves pip installing Flocker to get flocker-ca and ansible-galaxy, as well as fetching the roles from Ansible Galaxy that install Docker and Flocker on the nodes.

Here we go: install the needed tools from within your virtual environment.

Note: You can also install flocker-ca using another technique instead of using pip to install the .whl.

Note: this is the exact agent.yml transferred to our VMs. It is no trick that we are not adding credentials to the dataset portion of the YAML: because we used --scopes https://www.googleapis.com/auth/compute when creating the instances, we don’t need to.
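The YAML itself did not survive in this post, so here is a hedged sketch of what a GCE agent.yml might look like; the control-service hostname placeholder, the agent port, and the backend name are assumptions, not values copied from the original. The point to notice is that the dataset section carries no credentials:

```yaml
# Hedged sketch of an agent.yml for the GCE backend; the hostname
# placeholder, port, and backend name are assumptions.
version: 1
control-service:
  hostname: "<control-node-hostname>"
  port: 4524
dataset:
  backend: "gce"
  # No credentials needed here: the VMs were created with
  # --scopes https://www.googleapis.com/auth/compute
```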

Note: if you see errors, you can try re-running the Ansible playbook. If there are errors specifically around openssl or cryptography and you are on Mac OS X, you will likely need to add LDFLAGS="-L/usr/local/opt/openssl/lib" when you pip install Flocker, as mentioned before.

Next, you should be able to get the status of your Flocker cluster using flockerctl.
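As a sketch, and per the --help output earlier, flockerctl reads ./cluster.yml and the certificates in the current directory by default, so from the directory where the playbook generated them you could run the two cluster-inspection subcommands shown in that help text:

```shell
$ flockerctl list-nodes
$ flockerctl status
```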