Release notes archive

This page contains a historical archive of all release notes for
Google Kubernetes Engine prior to 2018. For more recent releases, see the
current release notes.

To get the latest product updates delivered to you, add the URL of this page to
your feed reader.

December 14, 2017

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:

December 5, 2017

New Features

December 1, 2017

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.

New Features

November 28, 2017

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:

Other Updates

November 13, 2017

Version updates

Kubernetes Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Kubernetes Engine masters and nodes.

New versions available for upgrades and new clusters

The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:

Kubernetes 1.7.10-gke.0

Kubernetes 1.8.3-gke.0

Other Updates

Kubernetes Engine's kubectl version has been updated from
1.8.2 to 1.8.3.

November 7, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New versions available for upgrades and new clusters

The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:

New Features

Added an option to the gcloud container clusters create command: --enable-basic-auth. This option allows you to create a cluster with basic authorization enabled.

Added options to the gcloud container clusters update command: --enable-basic-auth, --username, and --password. These options allow you to enable or
disable basic authorization and change the username and password for an existing cluster.
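As a sketch (the cluster name and credentials below are placeholders, not from the release note):

```shell
# Create a cluster with basic authorization enabled (placeholder name).
gcloud container clusters create example-cluster --enable-basic-auth

# Enable basic authorization on an existing cluster and set new credentials.
gcloud container clusters update example-cluster \
    --enable-basic-auth --username=admin --password=example-password
```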

October 31, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New versions available for upgrades and new clusters

The following versions are now available for new clusters and opt-in master
and node upgrades according to this week's rollout schedule:

Kubernetes 1.7.9-gke.0

Scheduled auto-upgrades

Clusters running the following Kubernetes versions will be automatically
upgraded as follows, according to the rollout schedule:

Clusters running Kubernetes 1.6.x will be upgraded to 1.6.11-gke.0.

Clusters running Kubernetes 1.7.x will be upgraded to 1.7.8-gke.0.

Clusters running Kubernetes 1.8.x will be upgraded to 1.8.1-gke.1.

This upgrade applies to cluster masters and, if
node auto-upgrades
are enabled, all cluster nodes.

New default version for new clusters

Kubernetes version 1.7.8-gke.0 is now the default version for new clusters,
available according to this week's rollout schedule.

New Features

You can now run Container Engine clusters in region
asia-south1 (Mumbai).

Fixes

Clusters using the Container-Optimized OS
node image version
cos-stable-61 can be affected by Docker daemon crashes and
restarts and become unable to schedule pods.

To mitigate this issue, clusters running Kubernetes versions 1.6.x, 1.7.x,
and 1.8.x are slated to automatically upgrade to versions 1.6.11-gke.0,
1.7.8-gke.0, and 1.8.1-gke.1 respectively. These versions have been remapped
to use the cos-stable-60-9592-90-0 node image.

Automatic upgrades must be enabled for this workaround to
take effect. If your cluster does not have auto-upgrades enabled, you must
manually upgrade your cluster to the appropriate version to employ the
workaround.

Known Issues

Clusters running Kubernetes version 1.7.6 might see inaccurate memory usage
metrics for pods running on the cluster. Clusters are slated to automatically
upgrade to version 1.7.8-gke.0 to mitigate this issue. If node auto-upgrades
are not enabled for your cluster, you can manually upgrade to 1.7.8-gke.0.

October 24, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

You can now edit your cluster's workloads when viewing them with the
Google Cloud Platform Console.

Known Issues

Kubernetes Third-party Resources, previously deprecated, have been removed
in version 1.8. These resources will cease to function on clusters upgrading
to version 1.8.1 or later.

Audit Logging, a beta feature in Kubernetes 1.8, is currently not enabled
on Container Engine.

Horizontal Pod Autoscaling with Custom Metrics, a beta feature in
Kubernetes 1.8, is currently not enabled on Container Engine.

Other Updates

Beta features in the Container Engine API (and gcloud
command-line interface) are now exposed via the new v1beta1 API surface. To use beta
features on Container Engine, you must configure the gcloud
command-line interface to use the Beta API surface to run
gcloud beta container commands. See
API organization
for more information.

October 10, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and opt-in
master upgrades for existing clusters, according to this week's rollout schedule:

Other Updates

Clusters running Kubernetes versions 1.7.8 and 1.6.11 have upgraded the
version of Container-Optimized OS
running on cluster nodes from version cos-stable-60-9592-84-0 to
cos-stable-61-9765-66-0. See the release notes for more details.

This upgrade updates the node's Docker version from 1.13
to 17.03. See the Docker
documentation for details on feature deprecations.

October 3, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New versions available for upgrades and new clusters

Kubernetes version 1.8.0-gke.0 is now available for early access partners
and alpha clusters only.
To try out v1.8.0-gke.0, sign up for the early access program.

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.7.x will be automatically
upgraded to Kubernetes v1.7.6-gke.1 according to this week's rollout schedule.

New Features

You can now rotate your username for basic authorization on existing
clusters, or disable basic authorization by providing an empty username.

Fixes

Kubernetes 1.7.6-gke.1: Fixed a regression in fluentd.

Kubernetes 1.7.6-gke.1: Updated the kube-dns add-on to
patch dnsmasq vulnerabilities announced on October 2. For more
information on the vulnerability, see the associated Kubernetes
Security Announcement.

Known Issues

Kubernetes 1.8.0-gke.0 (early access and alpha clusters only):
Clusters created with a subnetwork with an automatically-generated name that
contains a hash (e.g. "default-38b01f54907a15a7") might encounter issues
where their internal
load balancers fail to sync.

Container Engine clusters can enter a bad state if you convert your
automatically-configured network to a manually-configured one. In this
state, internal
load balancers might fail to sync, and node pool upgrades might
fail.

September 27, 2017

New Features

You can now configure a maintenance
window for your Container Engine clusters. You can use the maintenance
window feature to designate specific spans of time for scheduled maintenance
and upgrades to your master and nodes. Maintenance window is a beta
feature on Container Engine.

The Ubuntu node image is
now generally available for use on your Container Engine cluster nodes.

September 25, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.7.x will be automatically
upgraded to Kubernetes v1.7.5 according to this week's rollout schedule.

Cluster masters running Kubernetes versions 1.6.x will be automatically
upgraded to Kubernetes v1.6.10 according to this week's rollout schedule.

Fixes

Kubernetes v1.7.5: Fixed an issue with Kubernetes v1.7.0 to v1.7.4
in which controller-manager could become unhealthy and enter
a repair loop.

Kubernetes v1.6.10: Fixed an issue in which a GCP Load Balancer
could enter a persistently bad state if an API call failed while the ingress
controller was starting.

September 18, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New default version for new clusters

Kubernetes v1.7.5
is the default version for new clusters, available according to this week's
rollout schedule.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and
opt-in master upgrades for existing clusters:

1.7.6

1.6.10

New versions available for node upgrades and downgrades

The following Kubernetes versions are now available for node
upgrades and downgrades:

New Features

Starting in Kubernetes version 1.7.6, the available resources on cluster
nodes have been updated to account for the CPU and memory requirements of
Kubernetes node daemons. See the
Node
documentation in the cluster
architecture overview for more information.

Other Updates

The deprecated container-vm node image type has been removed
from the list of valid Container Engine node images. Existing clusters and
node pools will continue to function, but you can no longer create new
clusters and node pools that run the container-vm node
image.

Clusters that use the deprecated container-vm as a node image
cannot be upgraded to Kubernetes v1.7.6 or later.

September 12, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New versions available for upgrades and new clusters

The following Kubernetes versions are now available for new clusters and
opt-in master upgrades for existing clusters:

1.7.5

1.6.9

1.6.7

Scheduled master auto-upgrades

Cluster masters running Kubernetes versions 1.6.x will be upgraded to
Kubernetes v1.6.9
according to this week's rollout schedule.

New Features

You can now use IP aliases
with an existing subnetwork when creating a cluster. IP aliases are a Beta
feature in Google Kubernetes Engine version 1.7.5.

September 05, 2017

Version updates

Container Engine cluster versions have been updated as detailed in the
following sections. See versioning
and upgrades for a full list of the Kubernetes versions you can run on your
Container Engine masters and nodes.

New default version for new clusters

Kubernetes v1.6.9
is the default version for new clusters, available according to this week's
rollout schedule.

August 28, 2017

Clusters with a master version of v1.6.7 and Node
Auto-Upgrades enabled will have
nodes upgraded to v1.6.7.

Clusters with a master version of v1.7.3 and Node
Auto-Upgrades enabled will have
nodes upgraded to v1.7.3.

Starting in version 1.7.4, when Cloud Monitoring is enabled for a cluster,
container system metrics are pushed by Heapster to the Stackdriver
Monitoring API. The metrics remain free, though Stackdriver Monitoring API
quota will be affected.

The COS node image was upgraded from cos-stable-59-9460-73-0 to
cos-stable-60-9592-84-0. Please see the COS image release
notes for details.

The new COS image includes an upgrade of Docker, from v1.11.2 to
v1.13.1. This Docker upgrade contains many stability and performance
fixes. A full list of the Docker features that have been deprecated
between v1.11.2 and v1.13.1 is available on Docker's
website.

Three features in Docker v1.13.1 are disabled by default in the COS
m60 image, but are planned to be enabled in a later node image
release: live-restore, shared PID namespaces and overlay2.

Known issue: Docker v1.13.1 supports
HEALTHCHECK,
which was previously ignored by Docker v1.11.2 on COS m59. Kubernetes
supports more powerful liveness/readiness checks for containers, and
it currently does not surface or consume the HEALTHCHECK status
reported by Docker. We encourage users to disable HEALTHCHECK in
Docker images to reduce unnecessary overhead, especially if
performance degradation is observed after node upgrade.
Note that HEALTHCHECK could be inherited from the base image.

There is a known issue with StatefulSets in 1.7.X that causes StatefulSet pods
to become unavailable in DNS upon upgrade. We are currently recommending that
you not upgrade to 1.7.X if you are using DNS with StatefulSets. A fix is
being prepared. Additional information can be found here:
https://github.com/kubernetes/kubernetes/issues/48327


August 21, 2017

When using IP aliases, you can now represent service CIDR blocks by using a
secondary range instead of a subnetwork. This means you can use IP aliases
without specifying the --create-subnetwork option.

Cluster etcd fragmentation/compaction fixes.

Known Issues upgrading to v1.7.3:

There is a known issue with StatefulSets in 1.7.X regarding annotations, so
we are currently recommending that you not upgrade to 1.7.X if you are using
them. A fix is being prepared. Additional information can be found here:
https://github.com/kubernetes/kubernetes/issues/48327

August 14, 2017

Cluster masters running Kubernetes versions 1.7.X will be upgraded to
v1.7.3
according to the following schedule:

Updated Google Container Engine's kubectl from version 1.7.2 to 1.7.3.

Added --logging-service flag to gcloud beta container clusters update.
This flag controls the enabling and disabling of Stackdriver Logging integration.
Use --logging-service=logging.googleapis.com to enable and --logging-service=none
to disable.
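For instance (the cluster name is a placeholder):

```shell
# Enable Stackdriver Logging integration on an existing cluster.
gcloud beta container clusters update example-cluster \
    --logging-service=logging.googleapis.com

# Disable Stackdriver Logging integration.
gcloud beta container clusters update example-cluster \
    --logging-service=none
```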

Cloud monitoring can only be enabled in clusters that have monitoring scope
enabled in all node pools.

Known Issues upgrading to v1.6.7:

Kubernetes 1.6.7 includes version 0.9.5 of the GCP Ingress Controller. This version contains a
fix for a bug that caused the controller to incorrectly synchronize GCP URL Maps. Changes to
the ingress resource may not have caused the GCP URL Map to update. Using the fixed controller
will ensure maps reflect the host and path rules. To avoid potential disruption, validate that
all ingress objects contain the desired host or path rules.

August 3, 2017

Users with access to Kubernetes
Secret objects
can no longer view the secrets' values in Google Container Engine UI.
The recommended way to access them is with the kubectl tool.

August 1, 2017

The VM firewall rule (e.g. cluster-<hash>-vms) for non-legacy auto-mode
networks now includes both the primary and reserved VM ranges (10.128/9)
if the primary range lies outside of the reserved range.

You can now use the beta Ubuntu node image with clusters running Kubernetes
version 1.6.4 or higher.

You can now run Container Engine clusters in region europe-west3 (Frankfurt).

GCP Internal Load Balancers created through Kubernetes services (a
Beta feature in 1.7) have an issue that causes health-checks to fail
preventing them from functioning. This will be fixed in a future patch
release.

Services of type=LoadBalancer in clusters that have nodes running
Kubernetes v1.7 may fail GCP Load Balancer health checks. However, the Load
Balancers will continue to forward traffic to backends. This issue will be
fixed in a future patch release and may require special upgrade actions.

July 13, 2017

New views are available in the Google Container Engine UI, allowing cross-cluster
overview and inspection of various Kubernetes objects. This new UI will be
rolling out in the coming week.

Kubernetes 1.7 is being made available as an optional version for clusters.
Please see the release announcement
for more details on new features.

You can now use HTTP re-encryption through Google Cloud Load Balancing to
allow HTTPS access from the GCP Load Balancer to your service backend. This
feature ensures that your data is fully encrypted in all phases of transit,
even after it enters Google's global network.

Support for all-private IP (RFC 1918) addresses is generally available. These
addresses allow you to create clusters and access resources in all-private IP
ranges, and extend your ability to use Container Engine clusters with
existing networks.

Support for external source IP preservation is now generally available.
This feature allows applications to be fully aware of client IP addresses
for Kubernetes services you expose.

Cluster autoscaler now supports scaling node pools down to 0 or 1 node for
when you don't need capacity.

Cluster autoscaler can now use a pricing-based expander, which applies additional
cost-based constraints to let you use autoscaling in the most cost-effective
manner. This is the default as of 1.7.0 and is not user-configurable.

Cluster autoscaler now supports balanced scale-outs of similar node groups.
This is useful for clusters that span multiple zones.

You can now use API Aggregation to extend the Kubernetes API with custom APIs.
For example, you can now add existing API solutions such as service catalog,
or build your own.

The following new features are available on Alpha clusters running Kubernetes
version 1.7:

Local storage

External webhook admission controllers

Known Issues with v1.7.0:

Kubelet certificate rotation is not enabled for Alpha clusters. This issue
will be fixed in a future release.

Kubernetes services with network load balancers using a static IP will cause the kube-controller-manager to crash loop, leading to multiple master repairs. See issue #48848 for more details. This issue will be fixed in a future release.

June 26, 2017

Known Issues with v1.6.6

A bug in the version of fluentd bundled with Kubernetes
v1.6.6
causes JSON-formatted logs to be exported as plain text. This issue will be
fixed in v1.6.7. Meanwhile v1.6.6 will remain available as an optional
version for new cluster creation and opt-in master upgrades, but will not be
made the default. See issue
#48018 for more
details.

There will be no release for the week of July 3rd, since this is a holiday
in the US. The next release is planned for the week of July 10th.

June 20, 2017

The original plan to upgrade container cluster masters to 1.6 this week has been postponed due to a bug in the GLBC ingress controller
that causes unintentional overwrites of manual health check edits (see
known issues for v1.6.4).
This bug is fixed in 1.6.6.

DeleteNodepool now drains all nodes in the pool before deletion.

You can now run Container Engine clusters in region australia-southeast1 (Sydney).

June 13, 2017

v1.5.7
will no longer be available for new clusters and master upgrades.

All cluster masters will be upgraded to
v1.6.4
in the week of 2017-06-19.

June 5, 2017

Cluster masters running Kubernetes versions v1.6.0 - v1.6.3 will be
upgraded to
v1.6.4
according to the following schedule:

v1.6.0 is no longer available for container cluster node
upgrades/downgrades.

Known Issues

A known issue with Container Engine's IP Rotation
feature can cause it to break Kubernetes features that depend on the proxy
endpoint (such as kubectl exec, kubectl logs), as well as cluster metrics
exports into Stackdriver. This issue only affects your cluster if you ran
CompleteIPRotation, and have also disabled the default SSH
firewall rule for cluster nodes. There is a simple manual fix; see
IP Rotation known issues
for details.

Cluster masters running Kubernetes versions v1.6.0 and v.1.6.1 will be upgraded to
v1.6.2.

April 26, 2017

Kubernetes
v1.6.2
is now available for new clusters and opt-in master upgrades.

You can create a cluster with HTTP basic authentication disabled by passing
an empty username:
gcloud container clusters create CLUSTER_NAME --username=""
This feature only works with version 1.6.0 and later.

Fixed a bug where SetMasterAuth would fail silently on clusters below
v1.6.0. SetMasterAuth is only allowed for clusters at v1.6.0 and above.

Fixed a bug for clusters at v1.6.0 and above where fluentd pods were
mistakenly created on all nodes when logging was disabled.

gcloud kubectl version is now 1.6.2 instead of 1.6.0.

April 12, 2017

Kubernetes
v1.6.1
will be available for new clusters and opt-in master upgrades
according to the following planned schedule:

Container Engine hosted masters will be upgraded to v1.5.6 according to the
planned schedule mentioned above.

Known issue:

gcloud container clusters update --set-password (or --generate-password), for setting or rotating your cluster admin password, does not work on clusters running Kubernetes version 1.5.x or earlier. Please use this method only on clusters running Kubernetes version 1.6.x or later.
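On a cluster running Kubernetes 1.6.x or later, the rotation might look like the following (the cluster name is a placeholder):

```shell
# Generate and set a new random admin password.
gcloud container clusters update example-cluster --generate-password

# Or set a specific password interactively.
gcloud container clusters update example-cluster --set-password
```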

April 4, 2017

Kubernetes
v1.6.0
will be available for new clusters and opt-in master upgrades
according to the following planned schedule:

Container-Optimized OS
is now generally available. You can create or upgrade clusters and node
pools that use Container-Optimized OS by specifying imageType values of
either COS or GCI.

A new system daemon, node problem detector, is introduced in Kubernetes v1.6
on COS node images. It detects node problems (e.g. kernel/network/container
runtime issues) and reports them as node conditions and events.

Starting in 1.6, a default StorageClass instance with the gce-pd provisioner
is installed. All unbound PVCs that don't specify a StorageClass will
automatically use the default provisioner. This is different behavior from
previous releases, and can be disabled by modifying the default StorageClass
and removing the "storageclass.beta.kubernetes.io/is-default-class"
annotation. This feature replaces alpha dynamic provisioning, but the
alpha annotation will still be allowed and will retain the same behavior.
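A sketch of opting out of the default behavior with kubectl (the StorageClass name "standard" is an assumption; check the output of kubectl get storageclass for the actual name):

```shell
# List StorageClasses to find the default one.
kubectl get storageclass

# Remove the is-default-class annotation (a trailing "-" removes an
# annotation) so unbound PVCs no longer use the default provisioner.
kubectl annotate storageclass standard \
    storageclass.beta.kubernetes.io/is-default-class-
```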

gcloud container clusters create|get-credentials will now configure
kubectl to use the credentials of the active gcloud account by default,
instead of using application default credentials. This requires kubectl
1.6.0 or higher. You can update kubectl by running
gcloud components update kubectl.
If you prefer to use application default credentials to authenticate kubectl
to Google Container Engine clusters, you can revert to the previous behavior
by setting the container/use_application_default_credentials property:

gcloud config set container/use_application_default_credentials true

export CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS=true

gcloud command-line tool kubectl version updated to 1.6.0.

New clusters launched at 1.6.0 will use etcd3 in the master.
Existing cluster masters will be automatically updated to use etcd3 in a
future release.

Starting in 1.6, RBAC
can be used to grant permissions for users and Service Accounts to the
cluster's API. To help transition to using RBAC, the cluster's legacy
authorization permissions are enabled by default, allowing Kubernetes
Service Accounts full access to the API like they had in previous versions
of Kubernetes. An option will be rolled out soon to allow the legacy
authorization mode to be disabled in order to take full advantage of RBAC.

You can now use gcloud to set or rotate the admin password for Container
clusters by running

gcloud container clusters update --set-password

gcloud container clusters update --generate-password

During node upgrades, Container Engine will now verify and recreate the
Managed Instance Group for a node pool (at size 0) if required.

March 29, 2017

Kubernetes
v1.5.6
is the default version for new clusters. This version will be available for
new clusters and opt-in master upgrades according to the following planned
schedule:

February 23, 2017

February 14, 2017

It is no longer necessary to disable the HttpLoadBalancing add-on when you
create a cluster without adding the compute read/write scope to nodes.
Previously, when you created a cluster without adding the
compute read/write scope, you were required to disable HttpLoadBalancing.

January 31, 2017

gcloud command-line tool kubectl version updated to 1.5.2.

January 26, 2017

The gcloud command-line tool and kubectl 1.5+ support using gcloud credentials for authentication.
Currently, gcloud container clusters create and gcloud container clusters
get-credentials configure kubectl to use Application Default
Credentials
to authenticate to Container Clusters. If these differ from the IAM role that
the gcloud command-line tool is using, kubectl requests can fail authentication
(#30617). With Google
Cloud SDK 140.0.0 and kubectl 1.5+, the gcloud command-line tool can configure kubectl to use its
own credentials. This means that if, for example, the gcloud command-line tool is configured to use a
service account, kubectl will authenticate as the same service account.

To enable using the gcloud command-line tool's own credentials, set the
container/use_application_default_credentials property to false:

gcloud config set container/use_application_default_credentials false

The current default behavior is to continue using application default
credentials. The gcloud command-line tool credentials will be made the default for kubectl
configuration (via gcloud container clusters create|get-credentials) in a
future release.

December 14, 2016

Node pools can now opt in to automatically upgrade when a new Kubernetes
version becomes available.
See documentation for details.

Node pool upgrades can now be rolled back using the
gcloud alpha container node-pools rollback <pool-name> command.
See gcloud alpha container node-pools rollback --help for more details.

December 7, 2016

Google Cloud Platform Console now allows choosing between
Container-VM Image (GCI) and the deprecated container-vm when adding new node
pools to existing clusters.
To learn more about image types, click
here.

November 1, 2016

Kubernetes v1.4.5 and v1.3.10 include fixes for CVE-2016-5195 (Dirty Cow),
which is a Linux kernel vulnerability that allows privilege escalation. If
your clusters are running nodes with lower versions, we strongly encourage you
to upgrade them to a version of Kubernetes that includes a node image that is
not vulnerable, such as Kubernetes 1.3.10 or 1.4.5. To upgrade a cluster, see
https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade.

Upgrade operations can now be cancelled using gcloud alpha container
operations cancel <operation_id>. See gcloud alpha container operations
cancel --help for more details.

October 17, 2016

Reminder that the base OS image for nodes has changed in the 1.4 release. A
set of known issues have been identified and have been documented
here.
If you suspect that your application or workflow is having problems with new
clusters, you may select the old ContainerVM by following the opt-out
instructions documented
here.

Rewrote the node upgrade logic to make it less disruptive by waiting for the node to
register with the Kubernetes master before upgrading the next node.

Added support for new clusters and node-pools to use preemptible
VM instances by using the --preemptible flag. See
gcloud beta container clusters create --help and
gcloud beta container node-pools create --help for more details.
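For example (the cluster and pool names are placeholders):

```shell
# Create a node pool backed by preemptible VM instances.
gcloud beta container node-pools create preemptible-pool \
    --cluster=example-cluster --preemptible
```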

October 10, 2016

Reminder that the base OS image for nodes has changed in the 1.4 release. A
set of known issues have been identified and have been documented
here.
If you suspect that your application or workflow is having problems with new
clusters, you may select the old ContainerVM by following the opt-out
instructions documented
here.

Fix a bug in gcloud beta container images list-tags.

Add support for Kubernetes labels on new clusters and nodepools by passing
--node-labels=label1=value1,label2=value2.... See
gcloud container clusters create --help and
gcloud container nodepools create --help for more details and
examples.
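A sketch with placeholder label names and values:

```shell
# Create a cluster whose nodes carry Kubernetes labels.
gcloud container clusters create example-cluster \
    --node-labels=env=prod,disktype=ssd

# Nodes can then be selected by label, e.g.:
# kubectl get nodes -l env=prod
```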

Update kubectl to version 1.4.1.

October 5, 2016

Can now specify the cluster-version when creating Google Container Engine clusters.
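For example, pinning a new cluster to an explicit version instead of the default (the cluster name is a placeholder):

```shell
# Create a cluster at a specific Kubernetes version.
gcloud container clusters create example-cluster --cluster-version=1.3.8
```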

Update kubectl to version 1.4.0.

Introduce 1.3.8 as a valid cluster version. 1.3.8 fixes a log rotation leak on the master.

September 27, 2016

Container-VM Image (GCI), which was introduced earlier this year, is now the default
ImageType for new clusters and node-pools. The old container-vm is now deprecated; it
will be supported for a limited time. To learn more about how to use GCI, click here.

Can now create temporary clusters with all Kubernetes alpha features enabled
via

kubectl authorization for v1.3.0 clusters fails if the cluster is
created with a non-default master auth username (gcloud container
clusters create --username ...). This can be worked around by
authenticating with the cluster certificate instead by running

kubectl config unset users.gke_$PROJECT_$ZONE_$NAME.username

on the machine from which you want to run kubectl, where
$PROJECT,$ZONE,$NAME are the cluster's project id, zone and name,
respectively.

May 18, 2016

gcloud alpha container commands (e.g. create) now support specifying
alternate ImageTypes, such as the newly-available Beta
Container-VM Image.
To try it out, update to the latest gcloud (gcloud components install alpha ;
gcloud components update) and then create a new cluster: gcloud alpha
container clusters create --image-type=GCI $NAME. Support for ImageTypes in
Google Cloud Console will follow at a later date.

The gcloud container clusters list command now sorts the clusters
based on zone and then on cluster name.

April 29, 2016

April 21, 2016

Can now create a multi-zone cluster, which is a cluster whose nodes span
multiple zones, enabling higher availability of applications running in the
cluster. More details on multi-zone clusters can be found at
http://kubernetes.io/docs/admin/multiple-zones/. The ability to convert
existing clusters to be multi-zone will be coming soon.

gcloud container clusters create now allows specifying multiple zones within
a region for your cluster's nodes to be created in by using the
--additional-zones flag.
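A minimal sketch of a multi-zone cluster creation, assuming placeholder zone
names in the us-central1 region:

```shell
# Nodes are created in the primary zone plus each additional zone listed.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --additional-zones us-central1-b,us-central1-c
```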

Fixed bug that caused kubectl component to be missing from
gcloud components list on Windows.

gcloud command-line tool kubectl version updated to v1.2.2

April 13, 2016

Known issue: the "bastion route"
workaround for accessing services from outside of a kubernetes cluster no
longer works with 1.2.0 - 1.2.2 nodes, due to a change in kube-proxy. If you
are using this workaround, we recommend not upgrading nodes to 1.2.x at this
time. This will be addressed in a future patch release.

Clusters created without compute read/write node scopes must also disable
HttpLoadBalancing.
Note that disabling compute read/write is only possible via the raw API, not the
gcloud command-line tool or the Google Cloud Platform Console.

ClusterUpdates to clusters whose node scopes do not have compute read/write
must also specify an AddonsConfig with HttpLoadBalancing
disabled.

gcloud command-line tool kubectl version updated to 1.2.0.

March 16, 2016

CreateCluster will now succeed if the Kubernetes API reports at least 99% of
nodes have registered and are healthy within a startup deadline.

March 2, 2016

February 26, 2016

DeleteCluster will fail fast with an error if there are backend services that
target the cluster's node group, as existence of such services will block
deletion of the nodes.

You can now self-initiate an upgrade of a cluster's hosted master to the
latest supported Kubernetes version by running
gcloud container clusters upgrade --master. This lets you access versions
ahead of automatic Container Engine hosted master upgrades.
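For example, to upgrade a hypothetical cluster's hosted master (cluster name
and zone are placeholders):

```shell
# Upgrades only the hosted master, not the nodes.
gcloud container clusters upgrade my-cluster --master --zone us-central1-a
```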

December 8, 2015

Create cluster now checks that the network for the cluster has a route to the
default internet gateway. If no such route exists, the request returns with an
error immediately, instead of timing out waiting for the nodes to register.

November 30, 2015

Container Engine now supports manual-subnet networks.
Subnetworks are an Alpha feature of Google Compute Engine and you must be
whitelisted to use them. See the Subnetworks
documentation for whitelist information.

August 14, 2015

The compute and devstorage.read_only auth scopes are no longer required
and are no longer automatically added server-side to new clusters. The
gcloud command and GCP Console still add these scopes on the
client side when creating new clusters; the REST API does not.

Listing container clusters in a non-existent zone now results in a
404: Not Found error instead of an empty list.

The get-credentials command has moved to
gcloud beta container clusters get-credentials. Running
gcloud beta container get-credentials prints an error redirecting to the new
location.

--cluster-api-version removed. Cluster version not selectable
in v1 API; new clusters always created at latest supported version.

--image option removed. Source image not selectable in v1 API;
clusters are always created with latest supported ContainerVM image.
Note that using an unsupported image (i.e. not ContainerVM) would
result in an unusable cluster in most cases anyway.

Added --no-enable-cloud-monitoring to turn off cloud monitoring
(on by default).

Added --disk-size option for specifying boot disk size of node VMs.
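The two new flags can be combined in a single creation command; a sketch with
placeholder values (the exact command group may have differed at the time):

```shell
# Create a cluster with 200 GB boot disks and cloud monitoring disabled.
gcloud container clusters create my-cluster \
    --disk-size 200 \
    --no-enable-cloud-monitoring
```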

July 27, 2015

A firewall rule is now created at the time of cluster creation to make node
VMs accessible via SSH. This ensures that the Kubernetes proxy functionality
works.

Disabled the --source-image option in the v1beta1 API. Attempting to
run gcloud alpha container clusters create --source-image now returns an
error.

Removed the option to create clusters in the 172.16.0.0/12 private IP block.

July 24, 2015

Upgrade to Kubernetes v1 - Action Required

Users must upgrade their configuration files to the v1 Kubernetes API
before August 5th, 2015. This applies to any Beta Container Engine cluster
created before July 21st.

Google Container Engine will upgrade container cluster masters beginning on
August 5th, to use the v1 Kubernetes API. If you'd like to upgrade
before then, please sign up for an early upgrade.

This upgrade removes support for the v1beta3 API. All configuration files
must be formatted according to the v1 specification to ensure that your
cluster remains functional. The v1 API represents the production-ready set of
APIs for Kubernetes and Container Engine.

July 15, 2015

Existing masters running versions 0.19.3 or higher will be upgraded to 0.21.2.
Customers should
upgrade their container clusters at
their convenience. Clusters running versions older than 0.19.3 cannot be
upgraded.

The kubectl version is now 0.20.2.

July 10, 2015

The rolling-update command will fail when using kubectl v0.20.1 with
clusters running v0.19.3 of the Kubernetes API. To resolve the issue, specify
--api-version=v1beta3 as a flag to the rolling-update command:
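A sketch of the workaround, assuming a hypothetical replication controller
named frontend and a placeholder image:

```shell
# Pin the client to the v1beta3 API so rolling-update works against
# clusters running v0.19.3 of the Kubernetes API.
kubectl rolling-update frontend \
    --image=gcr.io/my-project/frontend:v2 \
    --api-version=v1beta3
```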

May 2, 2015

Clusters that don't have nginx will use bearer token auth instead of basic
auth.

KUBE_PROXY_TOKEN added to kube-env metadata.

April 22, 2015

A CIDR can now be requested during cluster creation when using the
gcloud command-line tool or the REST API. For the gcloud command-line tool, use the
--container-ipv4-cidr flag. If not set, the server will choose a
CIDR for the cluster.
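For example, to request a specific container CIDR at creation time (cluster
name and range are placeholders):

```shell
# Reserve 10.0.0.0/14 for the cluster's container IPs.
gcloud alpha container clusters create my-cluster --container-ipv4-cidr 10.0.0.0/14
```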

Standalone kubectl instructions are now available from
gcloud alpha container kubectl --help.

When fetching cluster credentials after creating a cluster using the
gcloud command-line tool, you'll never have to enter the passphrase for your SSH
key more than once.

Clusters created by the gcloud command-line tool now automatically send logs to
Google Cloud Logging unless explicitly disabled using the
--no-enable-cloud-logging flag. Logs are visible in the
logs section of the GCP Console once your
project has enabled the Google Cloud Logging API.

You can now access Container Engine clusters with standalone kubectl
(i.e. without gcloud alpha container) after setting an environment
variable, which is printed after successful
cluster creation and/or the first time accessing a cluster with
gcloud alpha container kubectl.

gcloud will always try to fetch certificate files for the cluster if they are
missing. The "WARNING: No certificate files found in..." message will resolve
itself on a subsequent gcloud alpha container kubectl command run if the
cluster is healthy.

Known issue: container commands are included in the alpha component, but
the kubernetes client (kubectl) is still installed with the preview
component, so users will need both.

April 1, 2015

All Container Engine commands have moved from gcloud preview to
gcloud alpha. Run gcloud components update alpha to install
this command group. Documentation has been updated to use the alpha
commands.

March 25, 2015

Kubernetes v0.13.2 is the default version for new clusters.

The kubectl version is now v0.13.1.

Updated to container-vm-v20150317, which starts up more reliably.

The default boot disk size for cluster nodes has been increased from 10GB to
100GB.

January 21, 2015

Improved the reliability of cluster creation when provisioning is slow.

January 15, 2015

Kubernetes v0.8.1 is the default version for newly created clusters. Our
v0.8.1 support includes changes on the 0.8 branch at 0.8.1.

Removed support for creating clusters at Kubernetes v0.8.0.
Existing clusters at this version can still be used and deleted.

Service accounts and auth scopes can be added to node instances at the time
of creation for all pods to use.

The command line interface now renders multiple error messages across
newlines and tabs, instead of using a comma separator.

Machine type information has been fixed in the cluster details page of the
Google Cloud Platform Console.

January 8, 2015

Kubernetes v0.8.0 is the default version for newly created clusters.
Kubernetes v0.7.1 is also supported. Refer to the
Kubernetes release notes
for information about each release. Our v0.7.1 support includes changes on the
0.7 branch at 0.7.1. Our v0.8.0 support includes changes in the 0.7.2 and
0.8.0 releases.

Removed support for creating clusters at Kubernetes v0.6.1 and v0.7.0.
Existing clusters at these versions can still be used and deleted.

The pods|services|replicationcontrollers create commands now validate
the resource type when creating with --config-file. This fixes the known
issue in the December 12, 2014 release.

December 19, 2014

Kubernetes v0.7.0 is the default version for newly created clusters.

Removed support for creating clusters at Kubernetes v0.4.4 and v0.5.5.
Existing clusters at these versions can still be used and deleted.

December 12, 2014

Known issues:

The pods|services|replicationcontrollers create commands do not validate
the resource type when creating with --config-file. The command creates
the resource specified in the configuration file, regardless of the command
group specified. For example, calling pods create and passing a service
configuration file creates a service instead of failing.

Updates:

Kubernetes v0.6.1 is the default version for newly created clusters.

Google Container Engine now reserves a /14 CIDR range for new clusters.
Previously, a /16 was reserved.

New clusters created with Kubernetes v0.4.4 now use the
backports-debian-7-wheezy-v20141108 image. This replaces the previous
backports-debian-7-wheezy-v20141021 image.

New clusters created with Kubernetes v0.5.5 or v0.6.1 now use the
container-vm image, instead of the Debian backports image.

The Service Operations
documentation has been updated to describe the createExternalLoadBalancer
option.

A new gcloud preview container kubectl command has been added to the CLI.
This is a pass-through command to call the native Kubernetes
kubectl
client with arbitrary commands, using the gcloud command-line tool to handle authentication.

The --cluster-name flag in all CLI commands has been renamed to --cluster.

New describe and list support for cluster operations.

December 5, 2014

The syntax for creating a pod with the Google Container Engine command line
interface has changed. The name of the pod is now specified as the value of
a --name flag. See the
Pod Operations page for details.

Clusters and Operations returned by the API now include a selfLink field and
Operations also include a targetLink field, which contain the full URL of
the given resource.

Added support for Kubernetes v0.4.4 and Kubernetes v0.5.5. The default
version is now v0.4.4. Refer to the
Kubernetes release notes
for information about each release. Our v0.4.4 support includes changes on the
0.4 branch from 0.4.2 through 0.4.4. Our v0.5.5 support includes changes on
the 0.5 branch through 0.5.5.

Removed support for creating clusters at Kubernetes v0.4.2. Existing clusters
at this version can still be used and deleted.

November 20, 2014

Updates to the gcloud preview container commands:

New error message that catches cluster creation failure due to missing
default network.

There is currently a bug preventing the default cluster name from working
if the local configuration cache is missing. If you see a stack trace
when omitting --cluster-name, repeat the command once with the flag
specified. Subsequent commands can omit the flag.

The default cluster name is set to the name of the new cluster when a
cluster is successfully created.

The gcloud preview container clusters list command lists clusters across
all zones if no --zone flag is specified. The list command ignores any
default zone that may be set.

November 4, 2014

(Updated November 10, 2014: Added two additional known issues with Google Container Engine.)

Google Container Engine is a new service that creates and manages
Kubernetes clusters for Google Cloud Platform users.

Container Engine is currently in Alpha state; it is suitable for
experimentation and is intended to provide an early view of the production
service, but customers are strongly encouraged not to run production workloads
on it.

The underlying open source Kubernetes project is being actively developed by
the community and is not considered ready for production use. This version of
Google Container Engine is based on
Kubernetes public build v0.4.2.
While the Kubernetes community is working hard to address
community-reported issues as they are reported, there are some known issues in
the v0.4.2 release that will be addressed in v0.5 and that will be incorporated
into Google Container Engine in the coming days.

Known issues with the Kubernetes 0.4.2 release

(Issue #1730)
External health checks that use in-container scripts (exec) do not work. Process, HTTP, and TCP health checks work properly.
Health checks that use in-container shell execution are not functioning;
they always report Unknown. This is a result of the transition to
docker exec introduced in Docker version 1.3. At this time process-level
health checks, TCP socket health checks, and HTTP level health checks are
functional. This has been addressed in v0.5 and will be available shortly.

(Issue #1712)
Pod update operations fail.
In v0.4.2, pod update functionality is not implemented, and a call to the
update API returns an unimplemented error. Pods must be updated by tearing
them down and recreating them. This will be implemented in v0.5.

(Issue #974)
Silent failure on internal service port number collision:
Each Kubernetes service needs a unique network port assignment. Currently if
you try to create a second service with a port number that conflicts with an
existing service, the operation succeeds but the second service will not
receive network traffic. This has been fixed, and will be available in v0.5.

(Issue #1161)
External service load balancing. The current Kubernetes design includes
a model that does a 1:1 mapping between an externally-exposed port number at
the cluster level, and a service. This means that only a single external
service can exist on a given port. For now this is a hard limitation of the
service.

Known issues with Google Container Engine

In addition to issues with the underlying Kubernetes project, there are some
known issues with the Google Container Engine tools and API that will be
addressed in subsequent releases.

Kubecfg binary conflicts: During the Google Cloud Platform SDK
installation, kubecfg v0.4.1 is installed and placed on the path by the
Google Cloud SDK. Depending on your $PATH variable, this version may
conflict with other installed versions from the open source Kubernetes
product.

Containers are assigned private IPs in the range 10.40.0.0/16 to 10.239.0.0/16.
If you have changed your default network settings from 10.240.0.0/16,
clusters may create successfully, but fail during operation.

All Container Engine nodes are started with and require project-level
read-write scope. This is temporarily required to support the dynamic
mounting of PD-based volumes to nodes. In future releases nodes will revert
to default read-only project scope.

Windows is not currently supported. The gcloud preview container
command is built on top of the Kubernetes client's kubecfg binary, which is
not yet available on Windows.

The default network is required. Container Engine relies on the existence
of the default network, and tries to create routes that use it. If you don't
have a default network, Container Engine cluster creation will fail.