Choosing the Right Heroku Postgres Plan


Heroku Postgres offers a wide spectrum of plans
appropriate for everything from personal blogs all the way to
large-dataset and high-transaction applications. Choosing the right
plan depends on the unique usage characteristics of your app as well
as your organization’s availability and uptime expectations.

Plan tiers

Heroku Postgres’s many plans
are segmented into broad tiers. While each tier has a few differences, the key factor in each tier is the uptime expectation for your database. The tiers are:

Hobby Tier: designed for apps that can tolerate up to 4 hours of downtime

Standard Tier: designed for apps that can tolerate up to 1 hour of downtime

Premium Tier: designed for apps that can tolerate up to 15 minutes of downtime

Row limit enforcement

When you exceed the hobby tier row limits and try to insert, you will see a Postgres error:

permission denied for relation <table name>

The row limits of the hobby tier database plans are enforced with the following mechanism:

When a hobby-dev database hits 7,000 rows, or a hobby-basic database hits 7
million rows, the owner receives a warning email stating that they are
nearing their row limit.

When the database exceeds its row capacity, the owner receives an
additional notification. At this point, the database is given a
7-day grace period to either reduce the number of records or
migrate to another plan.

If the number of rows still exceeds the plan capacity after 7
days, INSERT privileges are revoked on the database. Data can
still be read, updated, or deleted from the database. This ensures that
users can still bring their database into compliance
and retain access to their data.

Once the number of rows is again in compliance with the plan limit,
INSERT privileges are automatically restored to the database. Note
that the database sizes are checked asynchronously, so it may take a
few minutes for the privileges to be restored.
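One way to see how close a database is to its row limit is to sum the live row counts from Postgres’s statistics views. This is a sketch: the figure Heroku uses for enforcement may be computed differently, and `pg_stat_user_tables` reports estimates.

```shell
# Approximate total row count across user tables, via the Heroku CLI.
# Assumes the Heroku CLI is installed and the app has a Postgres add-on.
$ heroku pg:psql -c "SELECT sum(n_live_tup) AS approximate_rows FROM pg_stat_user_tables;"
```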

Standard tier

The Standard tier is designed for production applications that, while uptime is important, can tolerate up to 1 hour of downtime in a given month.

Premium tier

Within the premium tier, plans have differing memory, connection limits, and storage limits. The plans for the premium tier are:

| Plan Name | Provisioning name | Cache Size | Storage limit | Connection limit | Monthly Price |
| --- | --- | --- | --- | --- | --- |
| Premium0 | heroku-postgresql:premium-0 | 1 GB | 64 GB | 120 | $200 |
| Premium2 | heroku-postgresql:premium-2 | 3.5 GB | 256 GB | 400 | $350 |
| Premium4 | heroku-postgresql:premium-4 | 15 GB | 512 GB | 500 | $1200 |
| Premium6 | heroku-postgresql:premium-6 | 60 GB | 1 TB | 500 | $3500 |
| Premium7 | heroku-postgresql:premium-7 | 120 GB | 1 TB | 500 | $6000 |
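A plan is provisioned by its provisioning name. For example, to attach a Premium0 database (the app name here is a placeholder):

```shell
# Provision a premium-0 database for an app (app name is illustrative).
$ heroku addons:create heroku-postgresql:premium-0 --app my-app
```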

Cache size

Each production tier plan’s
RAM size constitutes the total amount of system memory on the
underlying instance’s hardware, most of which is given to
Postgres and used for caching. While a small amount of RAM is
used for managing each connection and other tasks, Postgres
takes advantage of almost all of this RAM for its cache.

Postgres constantly manages the cache of your data: rows you’ve
written, indexes you’ve made, and metadata Postgres keeps. When the
data needed for a query is entirely in that cache, performance is very
fast. Queries made from cached data are often 100-1000x faster than
from the full data set.

In a well-engineered, high-performance web application, 99% or more
of queries will be served from cache.

Conversely, having to fall back to disk is at least an order of
magnitude slower. Additionally, columns with large data types
(e.g. large text columns) are stored out-of-line via
TOAST,
and accessing large amounts of TOASTed data can be slow.

General guidelines

Access patterns vary greatly from application to application. Many
applications only access a small, recently-changed portion of their
overall data. Postgres can always keep that portion in cache as time
goes on, and as a result these applications can perform well on
smaller plans.

Other applications which frequently access all of their data don’t
have that luxury and can see dramatic increases in performance by
ensuring that their entire dataset fits in memory. To determine the
total size of your dataset, use the heroku pg:info command and look
for the Data Size row:
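The output looks something like the following (the attachment name and all values here are illustrative):

```shell
$ heroku pg:info
=== HEROKU_POSTGRESQL_CHARCOAL_URL
Plan:        Premium 0
Status:      Available
Data Size:   9.4 GB
Tables:      14
Connections: 20
```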

Though a crude measure, choosing a plan that has at least as much
in-memory cache available as the size of your total dataset will
ensure high cache ratios. However, you will eventually reach the point
where you have more data than the largest plan, and you will have to
shard. Plan ahead for sharding: it takes a long time to execute a
sharding strategy.

Determining the required cache size

There is no substitute for observing the database demands of your
application with live traffic to determine the appropriate
cache size. The cache hit ratio should be in the 99%+ range, uncommon
queries should take less than 100 ms, and common ones less than 10 ms.
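One common way to check the cache hit ratio is to query Postgres’s statistics views. This is a sketch using the standard `pg_statio_user_tables` view; it measures table (heap) reads only, not index reads.

```shell
# Table cache hit ratio across user tables; a value near 0.99+ is healthy.
# nullif() avoids division by zero on a freshly started database.
$ heroku pg:psql -c "
  SELECT sum(heap_blks_hit)::float /
         nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
         AS cache_hit_ratio
  FROM pg_statio_user_tables;"
```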


When the cache hit ratio begins to decrease, upgrading your database
will generally put it back in the green. The best approach is to use the
fast-changeover technique to move between
plans, watch your monitoring (for example, New Relic), and see
what works best for your application’s access patterns.
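A fast changeover between plans typically looks something like the following sketch. The app name and the color-based attachment name are placeholders; consult the Heroku documentation for the exact, current procedure before running this against a production database.

```shell
# Sketch of a plan changeover (all names are illustrative).
$ heroku addons:create heroku-postgresql:premium-2 --app my-app  # provision the new plan
$ heroku maintenance:on --app my-app                             # pause traffic to the app
$ heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK_URL --app my-app --confirm my-app
$ heroku pg:promote HEROKU_POSTGRESQL_PINK_URL --app my-app      # make the new database primary
$ heroku maintenance:off --app my-app                            # resume traffic
```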