Services can be deployed to shared clusters in the Graphcool Cloud. When deploying to a shared cluster, there is a free developer plan as well as a convenient and efficient pay-as-you-go pricing model for production applications.

You can run a fully self-contained Graphcool installation with Docker on your machine in minutes, letting you quickly iterate on your implementation without requiring internet access.

Graphcool is a complex system of multiple services working together.
However, we condensed this complexity down to three core services necessary to run Graphcool for an optimal local development experience:

A Development server: Contains all APIs and core business logic.

A Database: A MySQL database containing your data.

A local function runtime environment ("local function-as-a-service"): Allows you to quickly deploy and run your functions locally, without the need for an external FaaS provider.

Deploying the service to a local cluster

Note: If you already have a local .graphcoolrc file for your service that contains a target, graphcool deploy will not prompt you to select a cluster. You can however add the --interactive option to the command to enforce the prompt: graphcool deploy --interactive.
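For reference, a local .graphcoolrc with a deploy target might look roughly like the sketch below. The exact keys and the service ID are assumptions for illustration; check the file the CLI generated for your own service:

```yml
# Hypothetical sketch of a .graphcoolrc -- your actual file may differ
targets:
  default: dev
  dev: local/cj1a2b3c4d5e6f
```

When such a target exists, graphcool deploy uses it directly instead of prompting for a cluster.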

That's it, your service is now deployed to a local Docker container. Consequently, the endpoints printed in the output of the graphcool deploy command all target localhost:

With docker ps and docker images you can inspect Graphcool's Docker setup.

You will see four images in your local Docker image repository, where <VERSION> will be something like 0.8.1:

graphcool/graphcool-dev:<VERSION>: The core APIs.

graphcool/localfaas:<VERSION>: The local function runtime.

mysql:5.7: The database.

rabbitmq:3-management: A message broker we currently use. (Will be deprecated for local Graphcool soon.)
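To confirm these images are present, you can filter your local image list with a simple grep over the repository names listed above:

```shell
# Show only the images Graphcool pulled (names from the list above)
docker images | grep -E 'graphcool|mysql|rabbitmq'
```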

We use these four images to spin up four containers, which you can see with the aforementioned docker ps command. All of these containers run in a custom Docker network to leverage Docker DNS for service discovery.
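You can see this network yourself by listing Docker networks and inspecting the one the Graphcool containers joined. The network's name varies per setup, so look it up first:

```shell
# List Docker networks; the custom Graphcool network will appear here
docker network ls

# Inspect it to see all four containers and their internal DNS aliases
# (replace <network-name> with the name printed by the command above)
docker network inspect <network-name>
```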

By default, the core APIs bind to port 60000, and the local function runtime binds to 60001. If you start more than one local instance of Graphcool, the CLI will search for the next open port, e.g. 60002, to bind to. You can change the binding of the core APIs by setting a PORT environment variable.
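The port search can be sketched in a few lines of bash. This mimics the behavior described above; the CLI's actual implementation may differ:

```shell
#!/usr/bin/env bash
# Sketch of the port-probing behavior: start at 60000 and walk upward
# until a port nothing is listening on is found. Uses bash's /dev/tcp,
# so this requires bash rather than plain sh.
port=60000
while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
  port=$((port + 1))
done
echo "core APIs would bind to port $port"
```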

Your data in the database container and function deployments in the localfaas container are persisted using named Docker volumes.

docker logs -f CONTAINER_NAME is useful for peeking into the containers and debugging issues. The localfaas container in particular prints plenty of debug output for you to see what is going on, separate from the actual function logs, which you can still retrieve with the graphcool logs command.