Reference Architecture for Pivotal Cloud Foundry on AWS

This guide presents a reference architecture for Pivotal Cloud Foundry (PCF) on Amazon Web Services (AWS). This architecture is valid for most production-grade PCF deployments using three availability zones (AZs).

See PCF on AWS Requirements for general requirements for running PCF and specific requirements for running PCF on AWS.

PCF Reference Architectures

A PCF reference architecture describes a proven approach for deploying Pivotal Cloud Foundry on a specific IaaS, such as AWS, that meets the following requirements:

- Secure
- Publicly accessible
- Includes common PCF-managed services such as MySQL, RabbitMQ, and Spring Cloud Services
- Can host at least 100 app instances, and often far more

Pivotal provides reference architectures to help you determine the best configuration for your PCF deployment.

Base AWS Reference Architecture

The following diagram provides an overview of a reference architecture deployment of PCF on AWS using three AZs.

A new AWS subscription has a default quota of only about 20 EC2 instances, which is not enough to host a multi-AZ deployment. The recommended quota for EC2 instances is 100. AWS requires EC2 instance quota increase requests to specify a primary instance type, which should be t2.micro.
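On newer accounts, the quota increase can also be expressed in Terraform through the Service Quotas API. The following is a minimal sketch, not part of the reference Terraform scripts; the quota code shown is the "Running On-Demand Standard instances" code and is an assumption, and older accounts may need to file an AWS Support ticket instead:

```hcl
# Sketch: request a quota of 100 EC2 On-Demand Standard instances.
# Assumes the Service Quotas API is available to the account; the
# quota code below is an assumption and may differ by account era.
resource "aws_servicequotas_service_quota" "ec2_instances" {
  service_code = "ec2"
  quota_code   = "L-1216C47A" # Running On-Demand Standard instances (assumed)
  value        = 100
}
```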

Network Objects

The following list describes the network objects in this reference architecture, along with the estimated number of each.

External Public IPs

One per deployment, assigned to Ops Manager.

Estimated number: 1

Virtual Private Cloud (VPC)

One per deployment. A PCF deployment exists within a single VPC in a single AWS region, but should distribute PCF jobs and instances across three AWS AZs to ensure a high degree of availability.
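As an illustration, here is a minimal Terraform sketch of the VPC and its subnets; all resource names, CIDR ranges, and AZ names are example assumptions, not values from the reference Terraform scripts:

```hcl
# Sketch: one VPC in a single region, with PCF spread across three AZs.
resource "aws_vpc" "pcf" {
  cidr_block = "10.0.0.0/16" # example CIDR
}

# One public subnet for Ops Manager, the NAT gateway, and load balancers.
resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.pcf.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = "us-west-2a" # example AZ
}

# Three private subnets, one per AZ, for BOSH-deployed PCF jobs.
resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.pcf.id
  cidr_block        = cidrsubnet("10.0.16.0/22", 2, count.index)
  availability_zone = element(["us-west-2a", "us-west-2b", "us-west-2c"], count.index)
}
```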

This reference architecture requires 4 route tables: one for the public subnet, and one for each of the 3 private subnets across the 3 AZs:

- PublicSubnetRouteTable: Enables ingress and egress routes to and from the Internet through the Internet gateway, for Ops Manager and the NAT gateway.
- PrivateSubnetRouteTable (one per AZ): Enables egress routing to the Internet through the NAT gateway, for the BOSH Director and PAS.

For more information, see the Terraform script that creates the route tables and the script that performs the route table association.
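Continuing the VPC sketch above, a minimal version of these route tables might look as follows; the Internet gateway and NAT gateway resources (aws_internet_gateway.pcf, aws_nat_gateway.pcf) are assumed to be defined elsewhere, and the linked Terraform scripts remain the authoritative source:

```hcl
# Sketch: one public route table routing through the Internet gateway.
resource "aws_route_table" "public_subnet" {
  vpc_id = aws_vpc.pcf.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.pcf.id # ingress/egress via IGW
  }
}

# Sketch: one private route table per AZ, routing egress through NAT.
resource "aws_route_table" "private_subnet" {
  count  = 3
  vpc_id = aws_vpc.pcf.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.pcf.id # egress only, via NAT gateway
  }
}

# Associate each private subnet with its per-AZ route table.
resource "aws_route_table_association" "private" {
  count          = 3
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private_subnet[count.index].id
}
```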

Note: If an EC2 instance has a public IP address and sits on a subnet with an Internet gateway attached, it is accessible from the Internet through that public IP address, as is the case for Ops Manager. PAS needs Internet access because it uses an S3 bucket as its blobstore.

Estimated number: 1 VPC with 4 route tables

Security Groups

The reference architecture requires 5 Security Groups. For more information, see the Terraform Security Group rules script. The following table describes the Security Group ingress rules:

| Security Group | Port | From CIDR | Protocol | Description |
| --- | --- | --- | --- | --- |
| OpsMgrSG | 22 | 0.0.0.0/0 | TCP | Ops Manager SSH access |
| OpsMgrSG | 443 | 0.0.0.0/0 | TCP | Ops Manager HTTPS access |
| VmsSG | ALL | VPC_CIDR | ALL | Open connections among BOSH-deployed VMs |
| MysqlSG | 3306 | VPC_CIDR | TCP | Enable network access to RDS |
| ElbSG | 80 | 0.0.0.0/0 | TCP | HTTP to PAS |
| ElbSG | 443 | 0.0.0.0/0 | TCP | HTTPS to PAS |
| ElbSG | 4443 | 0.0.0.0/0 | TCP | WebSocket connection to the Loggregator endpoint |
| SshElbSG | 2222 | 0.0.0.0/0 | TCP | SSH connection to containers |

Note: The extra port 4443 on the Elastic Load Balancer is needed because the Elastic Load Balancer does not support WebSocket connections over its HTTP/HTTPS listeners.
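For illustration, a minimal Terraform sketch of the OpsMgrSG rules follows; it reuses the assumed aws_vpc.pcf resource from the sketches above, and the other four groups in the table follow the same pattern:

```hcl
# Sketch: ingress rules for the Ops Manager security group.
resource "aws_security_group" "ops_mgr" {
  name   = "OpsMgrSG"
  vpc_id = aws_vpc.pcf.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Ops Manager SSH access
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Ops Manager HTTPS access
  }
}
```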

Estimated number: 5

Load Balancers

PCF on AWS requires Elastic Load Balancers, which can be configured with multiple listeners to forward HTTP/HTTPS/TCP traffic. Two Elastic Load Balancers are recommended: PcfElb, which forwards traffic to the Gorouters, and PcfSshElb, which forwards traffic to the Diego Brain SSH proxy. For more information, see the Terraform load balancers script.

The following table describes the required listeners for each load balancer:

| ELB | Instance/Port | LB Port | Protocol | Description |
| --- | --- | --- | --- | --- |
| PcfElb | gorouter/80 | 80 | HTTP | Forward traffic to Gorouters |
| PcfElb | gorouter/80 | 443 | HTTPS | Terminate SSL and forward traffic to Gorouters |
| PcfElb | gorouter/80 | 4443 | SSL | Terminate SSL and forward traffic to Gorouters |
| PcfSshElb | diego-brain/2222 | 2222 | TCP | Forward traffic to Diego Brain for container SSH connections |

Each ELB binds a health check to monitor the health of its back-end instances:

- PcfElb checks the health of the Gorouters on port 80 with TCP.
- PcfSshElb checks the health of the Diego Brain on port 2222 with TCP.
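A minimal Terraform sketch of PcfElb as a Classic ELB, with its three listeners and TCP health check, might look like the following; the subnet and security group references reuse assumed names from earlier sketches, and the certificate variable is also an assumption:

```hcl
# Sketch: PcfElb with HTTP, HTTPS, and SSL (WebSocket) listeners.
# PcfSshElb follows the same pattern with a single TCP listener on 2222.
resource "aws_elb" "pcf_elb" {
  name            = "PcfElb"
  subnets         = [aws_subnet.public.id]
  security_groups = [aws_security_group.elb.id] # ElbSG, assumed defined elsewhere

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  listener {
    instance_port      = 80
    instance_protocol  = "http"
    lb_port            = 443
    lb_protocol        = "https"
    ssl_certificate_id = var.ssl_cert_arn # assumed variable
  }

  listener {
    instance_port      = 80
    instance_protocol  = "tcp"
    lb_port            = 4443
    lb_protocol        = "ssl"
    ssl_certificate_id = var.ssl_cert_arn
  }

  health_check {
    target              = "TCP:80" # Gorouter health check
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}
```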

Estimated number: 2

Jumpbox

Optional. Provides a way of accessing different network components. For example, you can configure it with your own permissions and then set it up to access Pivotal Network to download tiles. A jumpbox is particularly useful in IaaSes where Ops Manager does not have a public IP address; in these cases, you can SSH into Ops Manager or any other component through the jumpbox.
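If you deploy a jumpbox, a minimal Terraform sketch might look like the following; the AMI and key pair variables are assumptions, and it reuses the OpsMgrSG sketch above for its SSH rule:

```hcl
# Sketch: optional jumpbox on the public subnet with a public IP.
resource "aws_instance" "jumpbox" {
  ami                         = var.jumpbox_ami  # assumed variable
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public.id
  key_name                    = var.ssh_key_name # assumed variable
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.ops_mgr.id] # reuses SSH rule
}
```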

Estimated number: 1

Integrate PCF with Customer Data Center through VPN

At times, applications running on PCF need to access data in an on-premises datacenter. The connection between an AWS VPC and an on-premises datacenter is typically made through VPN peering. When employing non-VPN peering instead, there are several points to consider:

- Assign routable IP addresses with the following in mind:
  - It may not be realistic to request multiple routable /22 address spaces, due to IP exhaustion.
  - Using different VPC address spaces can cause snowflake deployments and present difficulties in automation.
- Only make the load balancer, NAT devices, and Ops Manager routable.

PCF components can route egress through a NAT instance. As a result, operators do not need to assign routable IP addresses to PCF components.
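For the VPN peering case, a minimal Terraform sketch of a static-route VPN connection between the PCF VPC and an on-premises datacenter might look like the following; the customer gateway address and the on-premises CIDR are placeholder assumptions:

```hcl
# Sketch: virtual private gateway attached to the PCF VPC.
resource "aws_vpn_gateway" "pcf" {
  vpc_id = aws_vpc.pcf.id
}

# The on-premises VPN endpoint; the address is an example placeholder.
resource "aws_customer_gateway" "datacenter" {
  bgp_asn    = 65000
  ip_address = "203.0.113.10"
  type       = "ipsec.1"
}

# IPsec VPN connection between the two gateways, using static routes.
resource "aws_vpn_connection" "pcf_to_datacenter" {
  vpn_gateway_id      = aws_vpn_gateway.pcf.id
  customer_gateway_id = aws_customer_gateway.datacenter.id
  type                = "ipsec.1"
  static_routes_only  = true
}

# Route traffic destined for the datacenter over the VPN connection.
resource "aws_vpn_connection_route" "datacenter" {
  destination_cidr_block = "10.100.0.0/16" # assumed on-premises CIDR
  vpn_connection_id      = aws_vpn_connection.pcf_to_datacenter.id
}
```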