Setup

Google Cloud Shell is a browser-based terminal that Google provides to interact with your GCP resources. It is backed by a free Compute Engine instance that comes with many useful tools already installed, including everything required to run this demo.

Click the button below to open the demo instructions in your Cloud Shell:

Change into the demo directory.

cd security-intro

Create a GKE Cluster

From Cloud Shell, enable the Kubernetes Engine API.

gcloud services enable container.googleapis.com

Create a GKE cluster using Istio on GKE. This add-on will provision
your GKE cluster with Istio.
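If you don't already have a cluster, a command along these lines creates one with the Istio add-on enabled (a sketch based on the Istio on GKE add-on; the cluster name istio-security-demo, zone, and node count are placeholders to adjust):

gcloud beta container clusters create istio-security-demo \
    --zone=us-central1-b \
    --num-nodes=4 \
    --addons=Istio \
    --istio-config=auth=MTLS_PERMISSIVE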

This Istio installation uses the default MTLS_PERMISSIVE mesh-wide security option. This means that all services in the cluster will send unencrypted traffic by default. In PERMISSIVE mode, you can still enforce strict mutual TLS for individual services, which we'll explore below.

Once the cluster is provisioned, check that Istio is ready by ensuring that all pods are Running or Completed.
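For example (assuming the Istio control plane runs in the istio-system namespace and the demo app runs in default):

kubectl get pods -n istio-system
kubectl get pods -n default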

🔎 Each pod has 2 containers, because each pod now has the injected Istio sidecar proxy. (The cartservice and redis pods live in the cart namespace, but will not be used in this demo.)

Now we're ready to enforce security policies for this application.

Authentication

Authentication refers to identity: Who is this service? Who is this end user? Can I trust that they are who they say they are?

One benefit of using Istio is that it provides uniformity for both service-to-service and end-user-to-service authentication. Istio abstracts authentication away from your application code by tunneling all service-to-service communication through the Envoy sidecar proxies. And by using a centralized Public Key Infrastructure, Istio provides the consistency needed to make sure authentication is set up properly across your mesh. Further, Istio allows you to adopt mTLS on a per-service basis, or easily toggle end-to-end encryption for your entire mesh. Let's see how.

Enable mTLS for the frontend service

Right now, the cluster is in PERMISSIVE mTLS mode, meaning all service-to-service ("east-west") mesh traffic is unencrypted by default. First, let's enable mTLS for the frontend microservice.

For both inbound and outbound requests for the frontend to be encrypted, we need two Istio resources: a Policy (to require mTLS for the frontend's inbound requests) and a DestinationRule (to make clients in the mesh use mTLS for requests to the frontend).

View both these resources in ./istio/mtls-frontend.yaml.
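For reference, here is a minimal sketch of what those two resources typically look like with Istio's v1alpha1 APIs; the names and hosts are illustrative, and the demo file is authoritative:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-authn
  namespace: default
spec:
  targets:
  - name: frontend    # applies only to the frontend service
  peers:
  - mtls: {}          # require mutual TLS on inbound requests
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
  namespace: default
spec:
  host: frontend.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # use Istio-issued certificates for this traffic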

Apply the resources to the cluster:

kubectl apply -f ./istio/mtls-frontend.yaml

Verify that mTLS is enabled for the frontend by trying to reach it from the
istio-proxy container of a different mesh service.

First, try to reach frontend from productcatalogservice with plain HTTP.
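For example (a sketch; the pod lookup and port are assumptions about the demo's manifests):

kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath='{.items[0].metadata.name}') \
    -c istio-proxy -- \
    curl -s -o /dev/null -w "%{http_code}\n" http://frontend:80/

The plain-HTTP request should now fail (for instance, with a connection reset), since the frontend only accepts mutual TLS.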

🔎 The TLS key and certificate for productcatalogservice come from Istio's Citadel component, running centrally. Citadel generates keys and certs for all mesh services, even when cluster-wide mTLS is set to PERMISSIVE.

Enable mTLS for the default namespace

Now that we've adopted mTLS for one service, let's enforce mTLS for the entire default namespace. Doing so will automatically encrypt service-to-service traffic for every Hipstershop service running in the default namespace.

Open istio/mtls-default-ns.yaml. Notice that we're using the same resource types (Policy and DestinationRule) for namespace-wide mTLS as we did for service-specific mTLS.
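Sketched, the namespace-wide versions differ in that the Policy drops its targets field and the DestinationRule matches every host in the namespace (illustrative; see the demo file for the exact definitions):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default        # with no targets, applies to the whole namespace
  namespace: default
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: default
spec:
  host: "*.default.svc.cluster.local"   # every service in the default namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL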

Apply the resources:

kubectl apply -f ./istio/mtls-default-ns.yaml

From here, we could enable mTLS globally across the mesh using the gcloud container clusters update command with the --istio-config=auth=MTLS_STRICT flag. Read more in the Istio on GKE documentation.
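That command would look roughly like this (a sketch; substitute your cluster name and zone):

gcloud beta container clusters update istio-security-demo \
    --zone=us-central1-b \
    --update-addons=Istio=ENABLED \
    --istio-config=auth=MTLS_STRICT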

You can also manually enable mesh-wide mTLS by applying a
MeshPolicy
resource to the cluster.
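A minimal sketch of that resource (MeshPolicy is cluster-scoped, and the mesh-wide policy must be named default):

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default     # the mesh-wide policy must be named "default"
spec:
  peers:
  - mtls: {}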

Overall, we hope this section showed how you can incrementally adopt encrypted service communication using Istio, at your own pace, without any code changes.

Add End-User JWT Authentication

⚠️ We recommend only using JWT authentication alongside mTLS (and not JWT by itself), because plaintext JWTs are not themselves encrypted, only signed. Forged or intercepted JWTs could compromise your service mesh. In this section, we're building on the mutual TLS authentication already configured for the default namespace.

Open ./istio/jwt-frontend.yaml.

🔎 This Policy uses Istio's test JSON Web Key Set (jwksUri), the public key set used to verify incoming JWTs. When we apply this Policy, Istio's Pilot component will pass this public key down to the frontend's sidecar proxy, which will allow it to accept or deny requests.

Also note that this resource updates the existing frontend-authn Policy we created in the last section; this is because Istio only allows one service-matching Policy to exist at a time.
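Roughly, the updated Policy adds an origins rule on top of the existing peers rule. In this sketch, the issuer and jwksUri are Istio's published test values, and the release tag in the URL is an assumption:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: frontend-authn
  namespace: default
spec:
  targets:
  - name: frontend
  peers:
  - mtls: {}                            # keep requiring mutual TLS between services
  origins:
  - jwt:
      issuer: "testing@secure.istio.io" # Istio's test issuer
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.4/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN          # bind the request principal to the end-user JWT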

Apply the updated frontend Policy:

kubectl apply -f ./istio/jwt-frontend.yaml

Set a local TOKEN variable. We'll use this TOKEN on the client side to make requests to the frontend.
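For example, using Istio's published test token (the release tag in the URL is an assumption; any JWT signed by the test issuer would work), then making a request without the token to see it rejected:

TOKEN=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.4/security/tools/jwt/samples/demo.jwt)
echo $TOKEN

# A request with no Authorization header should now be rejected:
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath='{.items[0].metadata.name}') \
    -c istio-proxy -- \
    curl -s -o /dev/null -w "%{http_code}\n" http://frontend:80/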

You should receive a 403 Forbidden error. This is expected, because we just locked down the frontend service to accept requests only from whitelisted subjects.

Control access to the frontend

Open the YAML file at ./istio/rbac-frontend.yaml.

🔎 The ServiceRole resource, frontend-viewer, specifies an abstract persona that can make GET and HEAD requests to the frontend.

The ServiceRoleBinding maps the frontend-viewer role to only those subjects that have a hello:world request header. Also note how, instead of specifying an explicit ServiceAccount that can make requests, we're using Istio's Constraints and Properties feature, which allows us to dynamically select subjects based on abstract selectors.
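A sketch of both resources (illustrative; the demo file is authoritative):

apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: frontend-viewer
  namespace: default
spec:
  rules:
  - services: ["frontend.default.svc.cluster.local"]
    methods: ["GET", "HEAD"]            # read-only access to the frontend
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: frontend-viewer-binding
  namespace: default
spec:
  subjects:
  - properties:
      request.headers[hello]: "world"   # any caller that sends a hello:world header
  roleRef:
    kind: ServiceRole
    name: frontend-viewer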

Apply the RBAC resources to the cluster:

kubectl apply -f ./istio/rbac-frontend.yaml

Make another request from productcatalogservice to the frontend. This time, pass
the hello:world request header.
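For example, mirroring the earlier request (a sketch; the Authorization header carries the test token from the previous section):

kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath='{.items[0].metadata.name}') \
    -c istio-proxy -- \
    curl -s -o /dev/null -w "%{http_code}\n" \
    -H "hello: world" \
    -H "Authorization: Bearer $TOKEN" \
    http://frontend:80/

This time, the request should succeed with a 200.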

🔎 From here, if you wanted to expand authorization to the entire default namespace, you could apply similar resources. Learn more in the Istio authorization documentation.

🎉 Nice job! You just configured a fine-grained Istio access control policy for one
service. We hope this section demonstrated how Istio can support specific, service-level
authorization policies using a set of familiar, Kubernetes-based RBAC resources.

Cleanup

To avoid incurring additional costs, delete the GKE cluster created in this demo:
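For example (substitute the cluster name and zone you used above):

gcloud container clusters delete istio-security-demo --zone=us-central1-b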