Reference Architecture for Pivotal Cloud Foundry on OpenStack

This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on OpenStack. This architecture is valid for most production-grade PCF deployments in a single project using three availability zones (AZs).

Two service accounts are recommended: one for OpenStack “paving,” and the other for Ops Manager and BOSH. Consult the following list:

Admin Account: Concourse will use this account to provision required OpenStack resources as well as a Keystone service account.

Keystone Service Account: This service account will be automatically provisioned with restricted access only to resources needed by PCF.

OpenStack Quota

The default compute quota on a new OpenStack subscription is typically not enough to host a multi-AZ deployment. The recommended quota for instances is 100. Your OpenStack network quotas may also need to be increased.
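Quotas can be inspected and raised with the python-openstackclient CLI. A minimal sketch, assuming a project named `pcf` (the project name and the network-quota values here are illustrative; only the instance quota of 100 comes from this guide):

```shell
# Inspect the current quotas for the project ("pcf" is an assumed name).
openstack quota show pcf

# Raise the compute instance quota to the recommended 100 instances.
openstack quota set --instances 100 pcf

# Network quotas may also need headroom for the networks, router,
# and ports PCF creates (values below are illustrative, not required).
openstack quota set --networks 10 --subnets 10 --ports 500 --routers 5 pcf
```

These commands require admin-level credentials and a reachable OpenStack cloud, so run them from the admin account described above.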

OpenStack Objects

The following sections describe the network objects in this reference architecture, along with an estimated count for each.

Floating IP Addresses

Two per deployment: one assigned to Ops Manager, the other to your jumpbox. Estimated number: 2.

Project

One per deployment. A PCF deployment exists within a single project and a single OpenStack region, but should distribute PCF jobs and instances across three OpenStack AZs to ensure a high degree of availability. Estimated number: 1.
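As a sketch, the two floating IPs can be allocated with the OpenStack CLI. The external network name `public`, the server name `ops-manager`, and the address shown are all assumptions for illustration:

```shell
# Allocate one floating IP each for Ops Manager and the jumpbox
# ("public" is an assumed external-network name).
openstack floating ip create public   # for Ops Manager
openstack floating ip create public   # for the jumpbox

# Attach an allocated address to the Ops Manager VM
# (substitute the IP returned by the create command above).
openstack server add floating ip ops-manager 203.0.113.10
```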

Networks

The reference architecture requires the following tenant networks:

1 x (/24) Infrastructure (Ops Manager, BOSH Director, jumpbox)

1 x (/20) PAS (Gorouters, Diego Cells, Cloud Controllers, and so on)

1 x (/20) Services (RabbitMQ, MySQL, Spring Cloud Services, and so on)

1 x (/24) On-demand services (various)

An Internet-facing network is also required:

1 x Public network

Note: In many cases, the public network is an “under the cloud” network that is shared across projects.

Estimated number: 5.
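A sketch of paving the infrastructure and PAS networks with the OpenStack CLI (the remaining tenant networks follow the same pattern). All names and CIDR ranges below are assumptions; only the prefix lengths come from this architecture:

```shell
# Infrastructure network: /24 for Ops Manager, BOSH Director, jumpbox.
openstack network create pcf-infrastructure
openstack subnet create pcf-infrastructure-subnet \
  --network pcf-infrastructure --subnet-range 10.0.0.0/24

# PAS network: /20 for Gorouters, Diego Cells, Cloud Controllers, etc.
openstack network create pcf-pas
openstack subnet create pcf-pas-subnet \
  --network pcf-pas --subnet-range 10.0.16.0/20

# Attach the tenant networks to a single router with an external
# gateway on the shared public network ("public" is an assumed name).
openstack router create pcf-router --external-gateway public
openstack router add subnet pcf-router pcf-infrastructure-subnet
openstack router add subnet pcf-router pcf-pas-subnet
```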

Routers

This reference architecture requires one router attached to all networks. Estimated number: 1.

Security Groups

The reference architecture requires two Security Groups. The following table describes the Security Group ingress rules:

| Security Group | Port | From CIDR | Protocol | Description |
| --- | --- | --- | --- | --- |
| OpsMgrSG | 22 | 0.0.0.0/0 | TCP | Ops Manager SSH access |
| OpsMgrSG | 443 | 0.0.0.0/0 | TCP | Ops Manager HTTPS access |
| VmsSG | ALL | VPC_CIDR | ALL | Open up connections among BOSH-deployed VMs |

Additional security groups, specific to your chosen load balancing solution, may also be needed.

Estimated number: 5.
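The ingress rules above can be sketched with the OpenStack CLI. The group names match the table; the `10.0.0.0/16` range stands in for your deployment CIDR and is an assumption:

```shell
# Ops Manager security group: SSH and HTTPS from anywhere.
openstack security group create OpsMgrSG
openstack security group rule create OpsMgrSG \
  --ingress --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0
openstack security group rule create OpsMgrSG \
  --ingress --protocol tcp --dst-port 443 --remote-ip 0.0.0.0/0

# BOSH-deployed VMs: allow all traffic within the deployment CIDR
# (replace 10.0.0.0/16 with your actual range).
openstack security group create VmsSG
openstack security group rule create VmsSG \
  --ingress --protocol any --remote-ip 10.0.0.0/16
```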

Load Balancers

PCF on OpenStack requires a load balancer, which can be configured with multiple listeners to forward HTTP/HTTPS/TCP traffic. Two load balancers are recommended: AppsLB, which forwards traffic to the Gorouters, and SSHLB, which forwards traffic to the Diego Brain SSH proxy.

The following table describes the required listeners for each load balancer:

| Name | Instance/Port | LB Port | Protocol | Description |
| --- | --- | --- | --- | --- |
| AppsLB | gorouter/80 | 80 | HTTP | Forward traffic to Gorouters |
| AppsLB | gorouter/80 | 443 | HTTPS | Terminate SSL and forward traffic to Gorouters |
| SSHLB | diego-brain/2222 | 2222 | TCP | Forward traffic to Diego Brain for container SSH connections |

Each load balancer needs a health check to validate the health of its back-end instances:

AppsLB checks health on Gorouter port 80 over TCP.

SSHLB checks health on Diego Brain port 2222 over TCP.

Note: In many cases, the load balancers are provided as an “under the cloud” service that is shared across projects.

Estimated number: 2.
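If your cloud provides load balancing through the Octavia service rather than as an “under the cloud” offering, AppsLB's HTTP listener and health check might be sketched as follows. The names, subnet, and timer values are assumptions; only the ports, protocols, and TCP health check come from the tables above:

```shell
# Create the apps load balancer on the PAS subnet (assumed name).
openstack loadbalancer create --name AppsLB --vip-subnet-id pcf-pas-subnet

# HTTP listener on port 80, backed by a pool of Gorouters.
openstack loadbalancer listener create --name AppsLB-http \
  --protocol HTTP --protocol-port 80 AppsLB
openstack loadbalancer pool create --name gorouter-pool \
  --lb-algorithm ROUND_ROBIN --listener AppsLB-http --protocol HTTP

# TCP health check against Gorouter port 80 (timer values illustrative).
openstack loadbalancer healthmonitor create --type TCP \
  --delay 5 --timeout 3 --max-retries 3 gorouter-pool
```

SSHLB follows the same pattern with a TCP listener and pool on port 2222.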

Jumpbox

Optional. Provides a way of accessing different network components. For example, you can configure it with your own permissions and then use it to access Pivotal Network to download tiles. A jumpbox is particularly useful in IaaSes where Ops Manager does not have a public IP address; in these cases, you can SSH into Ops Manager or any other component through the jumpbox.
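For example, with only the jumpbox holding a floating IP, Ops Manager can be reached through it using SSH's ProxyJump. The host names, user, and address below are illustrative:

```shell
# One-off hop through the jumpbox to Ops Manager's private address.
ssh -J ubuntu@jumpbox.example.com ubuntu@10.0.0.5

# Or persist the hop in ~/.ssh/config:
#   Host opsman
#     HostName 10.0.0.5
#     ProxyJump ubuntu@jumpbox.example.com
```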