This documentation is intended as a quick reference for Rackspace
customers who have questions about Rackspace Kubernetes-as-a-Service.

What is Rackspace Kubernetes-as-a-Service?

Rackspace Kubernetes-as-a-Service (KaaS) is a managed
service that enables Rackspace deployment engineers to provision Kubernetes®
clusters in your Rackspace Private Cloud (RPC) environment. Kubernetes is
an open-source orchestration tool that enables system administrators to
manage containerized applications in an automated manner. Running containerized
applications efficiently is a complex task that typically requires a team
of experts to architect, deploy, and maintain a cluster in your specific
environment. Rackspace KaaS does this work for you, so that
you can focus on what is important to your business.

The Rackspace KaaS product includes the following features:

The latest open-source version of Kubernetes

Your Kubernetes cluster runs the latest stable community Kubernetes
software and is compatible with standard Kubernetes tools. In the default
configuration, three Kubernetes worker nodes are created to ensure high
availability and fault tolerance for your cluster.

Logging and monitoring

Based on popular monitoring tools, such as Prometheus, Elasticsearch™,
and Grafana®, the Rackspace KaaS solution provides
real-time analytics and statistics across your cluster.

Private Docker® image registry

While public Docker image registries, such as Docker Hub and Quay, are
still an option, some of your images might require an additional level of
security. A private Docker image registry enables you to store and manage
your own Docker images in a protected location that restricts public access.

Advanced network configuration

Rackspace KaaS uses Calico and Flannel as network
solutions that provide overlay networking and network policies to enable you
to configure a flexible networking architecture. Many cloud environments
require a complex networking configuration that isolates one type of network
traffic from another. Network policies address many of these isolation
requirements.
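As an illustration, a minimal Kubernetes NetworkPolicy such as the following sketch (the namespace, labels, and port are hypothetical) isolates one type of traffic from another by allowing ingress to backend pods only from frontend pods:

```yaml
# Hypothetical example: allow inbound traffic to "backend" pods
# only from pods labeled app: frontend; all other ingress to the
# selected pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

A network plugin that enforces policies, such as Calico, is required for the policy to take effect.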

The following diagram provides a high-level overview of the Rackspace
KaaS solution:

My company has an open-source initiative to avoid vendor lock-in.
Is your Kubernetes Installer open-sourced?

The Rackspace Kubernetes Installer is a fork of an open-source project. In
future releases, this will change. The installer should have no impact on any
initiatives to avoid vendor lock-in because it exists outside of the scope of
what a user accesses or operates.

What happens to my Kubernetes cluster if we discontinue Rackspace
KaaS?

The current offering does not account for a Build, Operate, and Transfer (BOT)
model. This feature is on the current roadmap. If this is a requirement or
concern, contact your Rackspace Account Manager to discuss available options.

Why did you not use OpenStack® Magnum to deploy Kubernetes?

While Magnum is an OpenStack upstream project, its design and features do not
correspond to the direction of the upstream Kubernetes community in terms of
functionality, cloud-agnostic tooling, cluster lifecycle management, and
flexibility. Rackspace KaaS is aligned with the
Kubernetes community efforts and provides unified support for multiple
cloud platforms, as well as for bare metal deployments.

What OpenStack components are consumed by the Rackspace KaaS
offering?

The current Rackspace KaaS offering consumes the following OpenStack
components:

OpenStack Compute service (nova)

OpenStack Networking service (neutron)

OpenStack Load Balancing service (octavia)

OpenStack Identity service (keystone)

OpenStack Block Storage service (cinder)

OpenStack DNS service (designate)

OpenStack Object Storage service (swift)

Are there any extra components beyond the ones listed above?

The Rackspace KaaS offering deploys an authentication
bridge, a user interface, and a token service on the physical servers that are
also known as the OpenStack Infrastructure nodes.

In addition, Rackspace KaaS on OpenStack requires
Cinder backed by Ceph. We use Cinder with Ceph because Cinder's default
Logical Volume Manager (LVM) backend does not support data replication, which
is a requirement for data volume failover and resiliency of the Kubernetes
worker nodes.

What load balancer software is used in the Rackspace KaaS
offering?

The choice of load balancers and Ingress Controllers depends on the underlying
cloud platform capabilities. Rackspace KaaS on OpenStack
leverages a highly available instance of OpenStack Octavia Load Balancing
as a Service (LBaaS) with NGINX® Ingress Controllers preconfigured and
deployed.
This configuration enables Day 1 support for application developers to deploy
Kubernetes applications with native type: LoadBalancer support for
service exposure.
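For example, a developer might expose an application with a Service manifest like this minimal sketch (the names, labels, and ports are hypothetical); the underlying LBaaS provisions the load balancer when the service is created:

```yaml
# Hypothetical example: expose the pods labeled app: web through
# a cloud load balancer by using a LoadBalancer-type service.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80         # external port on the load balancer
      targetPort: 8080 # container port that serves the application
```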

What do you use as your backend Object store for KaaS?

Rackspace KaaS can leverage several types of storage
depending on the use case. At the object store level, we require access to a
Swift object API or Ceph Rados Gateway (RGW). The object storage stores
backups, snapshots, and container images pushed to the private Docker
registry (Harbor).
These object storage APIs are also exposed for application developers to use
within their applications.

Why is an object store a requirement?

Using object storage for backups, snapshots, container image storage, and
versioning is the Kubernetes community standard. By using the object
storage native features, as opposed to writing those features into a specific
filesystem, Rackspace KaaS enables support for storage and
versioning of disaster recovery "blobs" over a period of months and years.

Why are you choosing to use Ceph?

To support OpenStack, Rackspace KaaS requires an end-to-end,
highly available architecture.

By default, Cinder does not support volume replication. If a single
Cinder host fails, the data stored on that block device is lost, and
Kubernetes cannot fail over data volumes to other Kubernetes nodes.
By using Ceph's volume replication, we ensure that all failure
scenarios result in a volume/block device that can fulfill Kubernetes failover
semantics.
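For instance, an application developer typically requests a replicated volume through a PersistentVolumeClaim such as this sketch; the storage class name cinder-ceph is an assumption and might differ in your cluster:

```yaml
# Hypothetical example: request a 10 GiB volume backed by
# Ceph-replicated Cinder storage. The storageClassName is an
# assumption; check your cluster's available storage classes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-ceph
  resources:
    requests:
      storage: 10Gi
```

Because the underlying block device is replicated in Ceph, the volume can be reattached to a pod on another worker node if the original node fails.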

Will I have the ability to use other public cloud Infrastructure as a
Service (IaaS) platforms, such as Azure, AWS, or GCP?

Support for other IaaS platforms is something that we are currently examining
and scoping on our product roadmap. If you have an urgent requirement to
support a specific IaaS platform, such as a hybrid/burst scenario, contact
your Rackspace Account Manager to raise the priority to our product team.

Will I be able to federate between clouds?

Kubernetes federation, as a native Kubernetes feature, is supported out of
the box. However, additional work, including inter-data center (DC) and
inter-cloud federation, is on our roadmap, as is usability and
security feature integration.

Will there be a single pane of glass UI where I can manage all of my clusters
across IaaS platforms and OpenStack installations?

This functionality is on our product roadmap, and we are actively working to
create a unified user experience regardless of infrastructure choice.

What is the reference architecture and base requirements for a Rackspace
Private Cloud powered by OpenStack (RPCO) KaaS environment? How many compute
nodes and how many Ceph nodes do I need?

These are the minimum requirements for a highly available Kubernetes cluster:

3 x Kubernetes Master nodes (VMs). According to the OpenStack Compute
service anti-affinity policy, each Kubernetes Master node is located
on a separate nova host.

5 x etcd nodes (VMs). According to the OpenStack Compute service
anti-affinity policy, each etcd node is located on a different
nova host.

The number of Kubernetes worker nodes depends on your specific deployment
needs and workloads. Ideally, two Kubernetes worker nodes should not be hosted
on the same OpenStack compute node. However, this is not enforced because it
might dramatically increase the total count of OpenStack compute nodes in
some deployments.

Therefore, you need a minimum of five OpenStack compute hosts to
set affinity rules correctly for the etcd cluster.

By default, Ceph requires a minimum of three nodes for data replication and
resiliency.

Can a Rackspace KaaS cluster use the same control plane as my OpenStack
private cloud?

In a typical OpenStack deployment, you have a control plane and a data
plane. The control plane consists of the nodes that serve the OpenStack
services. The data plane consists of the aggregated physical hosts where
your workloads (VMs) run.

Because Rackspace KaaS runs within the context of OpenStack Compute nodes,
it runs within the data plane of your OpenStack deployment. However, supporting
services, such as authentication and others, run on the same control plane
nodes as other OpenStack services.

How does the Rackspace KaaS solution work? Is it correct
that Kubernetes communicates with OpenStack and tells it to spin VMs, and
then Kubernetes deploys Docker containers inside of those OpenStack VMs?

When Kubernetes needs a new node, it makes a nova API call to create a new
VM, the Kubernetes provisioner configures and installs Kubernetes and
supporting software on that VM, and then adds the node to the cluster.

Kubernetes supports multiple container formats. After you deploy a Kubernetes
cluster or add a node, Kubernetes schedules "work" on the worker nodes of
the cluster. In a Kubernetes environment, this work takes the form of pods,
deployments, and services, not specifically Docker containers.
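For example, a minimal Deployment manifest such as the following sketch (the names and image are illustrative) asks Kubernetes to schedule three replicated pods across the worker nodes:

```yaml
# Hypothetical example: a minimal Deployment that schedules three
# replicas of an NGINX pod; Kubernetes places the pods on worker
# nodes and restarts them if they fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
```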

Do you have any details on the EFK that will be deployed on our Managed
Kubernetes cluster?

Each Rackspace KaaS deployment includes a fully configured
Elasticsearch, Fluentd, and Kibana (EFK) stack. This installation is meant for
your application developers to use for centralized application logging for the
services and applications that you deploy.

These services are open-source, upstream software with a reasonable default
configuration for application development use.

How do I view the Kubernetes cluster logs?

When running applications in a Kubernetes cluster, you can use the kubectl
logs command to collect the entire output of an application. Your Kubernetes
cluster is preconfigured to aggregate all of your application logs and make
them searchable by using Elasticsearch and Kibana. You can access these logs
using the Ingress Controller that is provided with your cluster.

To view the logs, complete the following steps:

In your browser, navigate to
https://ingress.${RS_K8S_CLUSTER_NAME}.${RS_K8S_DOMAIN}/logs.

When prompted, enter your RS_K8S_USERNAME and RS_K8S_PASSWORD credentials.

Will the Elasticsearch database clustering be configured during the
deployment?

By default, Rackspace KaaS is deployed with three
Elasticsearch containers. If more Elasticsearch instances are required, you can
request that your Account Manager increase the replica count of the
Elasticsearch containers to the required number.

Is the logrotate process configured to run on all Kubernetes worker nodes?

Yes. Every Rackspace Kubernetes-as-a-Service installation uses nova VMs that
run a hardened Linux OS (Container Linux) that has log rotation enabled.
However, this functionality is not exposed to end users.

How do I customize Fluentd logs?

Currently, the EFK stack, which includes Fluentd, is provided as a fully
managed service. If you need to customize your deployment, contact your
Account Manager to provide your use case and work with support to enable the
required customization.

How do I upgrade Fluentd to Treasure Data®?

We do not support the Treasure Data product out of the box. However, the
support team can perform required customizations to enable log export to
additional systems. This means that we can direct logs to multiple systems but
do not support the full-scale replacement of Fluentd.

Can you help me create and troubleshoot my YAML files and troubleshoot my
Kubernetes deployments?

Yes. Rackspace offers best practices and assistance with creating the various
YAML files that are used by the Kubernetes primitives. Rackspace
employees do not replace a team with Kubernetes knowledge, but augment
it.

How do I manage my users?

Rackspace has integrated Kubernetes authentication into the OpenStack
Identity service (keystone). Therefore, you can use the OpenStack Dashboard
and other standard OpenStack tools to manage your users and groups.

How can I reach services that are not running inside a Kubernetes cluster?

For Rackspace KaaS, hybrid workloads are the main use case
and reason that we chose OpenStack as our IaaS platform. We use native
OpenStack Networking service (neutron) for communication between OpenStack and
Kubernetes. For Kubernetes internal networking, we leverage network overlay
technologies such as Flannel, Virtual Extensible LAN (VXLAN), and Calico.
Therefore, services and applications that run within the Kubernetes
cluster can communicate with the services that run in virtual machines in
your OpenStack environment, or with other applications and services that have
network access to the OpenStack environment.

How do I scale a Kubernetes cluster?

Currently, node addition and recovery are performed by our Support personnel.
To request that additional worker nodes be added to your Kubernetes cluster,
submit a ticket in your account control panel.