Configuring the API

The openapi.yaml file provides the OpenAPI specification that describes the API
and its management policies, such as authentication and API key restrictions,
for the API service application.

Each API service is configured by its own openapi.yaml, which you deploy to
Google Service Management using the gcloud command-line tool.
Service Management stores the API service configurations indexed by a service
name (the host field in openapi.yaml) and a config ID.
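
For illustration, a minimal openapi.yaml might look like the following sketch.
The host value, project name, and path are assumptions, not values from this
document:

```yaml
# Minimal OpenAPI specification sketch for an Endpoints API.
# The host value becomes the service name in Google Service Management.
swagger: "2.0"
info:
  title: "My API"
  version: "1.0.0"
host: "my-api.endpoints.MY_PROJECT.cloud.goog"   # hypothetical service name
paths:
  "/echo":
    post:
      summary: "Echo a message"
      operationId: "echo"
      responses:
        200:
          description: "Echoed message"
security:
  - api_key: []                 # require an API key on every request
securityDefinitions:
  api_key:
    type: "apiKey"
    name: "key"
    in: "query"
```

The security and securityDefinitions sections are where management policies
such as API key restrictions are expressed.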

Deploying to Kubernetes

You deploy an API service by running the Extensible Service Proxy (ESP) as a
Docker container in the same Kubernetes Pod as the application container.
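
A minimal sketch of that sidecar arrangement, assuming a hypothetical echo
application image listening on port 8081:

```yaml
# Pod template fragment: ESP runs as a sidecar in front of the app.
spec:
  containers:
    - name: esp
      image: gcr.io/endpoints-release/endpoints-runtime:1
      args: ["--http_port=8080", "--backend=127.0.0.1:8081"]
      ports:
        - containerPort: 8080   # clients reach the API through ESP
    - name: echo
      image: gcr.io/google-samples/echo:latest   # hypothetical app image
      ports:
        - containerPort: 8081   # ESP proxies to the app over localhost
```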

Notice that the set of Pods running the proxy and the application is grouped
under a Kubernetes Service using a label selector, such as app: my-api.
The Kubernetes Service defines the access policy that load-balances client
requests to the proxy port.
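
Such a Service could look like the following sketch; the name and ports are
assumptions for illustration:

```yaml
# Kubernetes Service: groups the proxy + app Pods via the app: my-api
# label and load-balances client requests to the ESP port.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api          # matches the label on the proxy/app Pods
  ports:
    - port: 80           # port exposed to clients
      targetPort: 8080   # ESP's HTTP port inside each Pod
```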

Now create a Kubernetes ConfigMap from your custom nginx.conf using kubectl:

kubectl create configmap nginx-config --from-file=nginx.conf
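
The ConfigMap then needs to be mounted into the Pod so that ESP can read it.
A sketch of the relevant Pod spec fragment follows; the mount path and the
use of the -n flag to point ESP at a custom nginx config are assumptions for
this sketch:

```yaml
# Pod spec fragment: mount the nginx-config ConfigMap into the ESP
# container and point ESP at the custom nginx.conf.
containers:
  - name: esp
    image: gcr.io/endpoints-release/endpoints-runtime:1
    args: ["-n", "/etc/nginx/custom/nginx.conf",   # use the custom config
           "--service=SERVICE_NAME", "--version=SERVICE_CONFIG_ID"]
    volumeMounts:
      - name: nginx-config
        mountPath: /etc/nginx/custom
volumes:
  - name: nginx-config
    configMap:
      name: nginx-config   # the ConfigMap created above
```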

Edit the Kubernetes configuration file, such as esp_echo_custom_config_gke.yaml,
replacing SERVICE_NAME and SERVICE_CONFIG_ID with the values returned when you
deployed the API configuration.
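
The placeholders typically appear in the ESP container's arguments. A hedged
sketch of the fragment to edit, where the surrounding port values are
assumptions:

```yaml
# Fragment of esp_echo_custom_config_gke.yaml: substitute the two
# placeholders with the values returned when you deployed the API.
args:
  - "--http_port=8080"
  - "--backend=127.0.0.1:8081"
  - "--service=SERVICE_NAME"        # your service name (openapi.yaml host)
  - "--version=SERVICE_CONFIG_ID"   # config ID from the deployment output
```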

Note: This example is for deployments on Container Engine. If you are not using
Container Engine, see Endpoints for Kubernetes for instructions on creating
your service account credentials and mounting them as a Kubernetes volume.
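
Outside of Container Engine, ESP needs explicit credentials to call Google
APIs. A sketch of mounting a service-account key from a Kubernetes Secret;
the Secret name, mount path, and the --service_account_key flag are
assumptions for this sketch:

```yaml
# Pod spec fragment: mount a service-account key Secret and tell ESP
# where to find it.
containers:
  - name: esp
    image: gcr.io/endpoints-release/endpoints-runtime:1
    args: ["--service=SERVICE_NAME", "--version=SERVICE_CONFIG_ID",
           "--service_account_key=/etc/nginx/creds/service-account-key.json"]
    volumeMounts:
      - name: service-account-creds
        mountPath: /etc/nginx/creds
        readOnly: true
volumes:
  - name: service-account-creds
    secret:
      secretName: service-account-creds   # created with kubectl create secret
```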

Finally, start the service with the updated Kubernetes configuration file using
kubectl:

kubectl create -f esp_echo_custom_config_gke.yaml

That's it! You have deployed Cloud Endpoints on Kubernetes.

Architecture

The following diagram shows the overall architecture: the Extensible Service
Proxy runs as a sidecar container in front of the API service application
container, with the my-api API hosted at my-api.com and backed by a
Kubernetes Service.