If you are not sure which option to use, the most common solution
is to
add a persistent disk
to your instance.

Introduction

By default, each Compute Engine
instance has a single boot persistent disk that contains the operating system.
When your apps require additional storage space, you can add one or
more additional storage options to your instance. Read
Compute Engine pricing for cost
comparisons.

The available options, and the type of storage that each provides, are:

Zonal standard persistent disks: Efficient and reliable block storage

Regional persistent disks: Efficient and reliable block storage with synchronous replication across two zones in a region

Zonal SSD persistent disks: Fast and reliable block storage

Regional SSD persistent disks: Fast and reliable block storage with synchronous replication across two zones in a region

Local SSDs: High-performance local block storage; data persists only until the instance is stopped or deleted

Cloud Storage buckets: Object storage that instances in any zone can read and write

Zonal persistent disks (Standard and SSD)

Persistent disks are durable network storage devices that your instances can
access like physical disks in a desktop or a server. The data on each
persistent disk is distributed across several physical disks.
Compute Engine manages the physical disks and the data distribution
to ensure redundancy and optimize performance for you. Standard persistent
disks are backed by
standard hard disk drives (HDD).
SSD persistent disks are backed by
solid-state drives (SSD).

Persistent disks are located independently from your virtual machine (VM)
instances,
so you can detach or move persistent disks to keep your data even after you
delete your instances. Persistent disk performance scales automatically with
size, so you can resize your existing persistent disks or add more persistent
disks to an instance to meet your performance and storage space requirements.
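
For example, you could create an SSD persistent disk and attach it to an existing
instance with the gcloud command-line tool; the disk name, size, zone, and
instance name below are placeholder values:

    # Create a 500 GB SSD persistent disk (example name, size, and zone).
    gcloud compute disks create example-data-disk \
        --size=500GB --type=pd-ssd --zone=us-central1-a

    # Attach the new disk to an existing instance in the same zone.
    gcloud compute instances attach-disk example-instance \
        --disk=example-data-disk --zone=us-central1-a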

Ease of use

Compute Engine handles most disk management tasks for you so that
you do not need to deal with partitioning, redundant disk arrays, or subvolume
management. You can apply these practices to your persistent disks if you want,
but you can save time and get the best performance if you
format your persistent disks
with a single file system and no partition tables. If you need to separate
your data into multiple unique volumes,
create additional disks
rather than dividing your existing disks into multiple partitions.
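
As a sketch of that recommendation, you might format and mount a newly attached
disk on the VM as a single ext4 file system with no partition table; the device
path and mount point below are placeholder values:

    # Format the whole disk with a single ext4 file system (no partition table).
    sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb

    # Create a mount point and mount the disk.
    sudo mkdir -p /mnt/disks/data
    sudo mount -o discard,defaults /dev/sdb /mnt/disks/data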

Performance

Persistent disk performance is predictable and scales linearly with
provisioned capacity until the limits for an instance's provisioned vCPUs are
reached. See
Optimizing Persistent Disk and Local SSD Performance
for detailed information about performance scaling limits and optimization.

Standard persistent disks are efficient and economical for handling
sequential read/write operations, but are not optimized to handle high rates of
random input/output operations per second (IOPS). If your applications require
high rates of random IOPS, use SSD persistent disks. SSD persistent disks
are designed for single-digit millisecond latencies. Observed latency is
application-specific.

Compute Engine optimizes performance and scaling on persistent disks
automatically. You do not need to stripe multiple disks together or pre-warm
disks to get the best performance. When you need more disk space or better
performance, simply
resize your disks
(and possibly add more vCPUs)
to add more storage space, throughput, and IOPS. Persistent disk performance is
based upon the total persistent disk capacity attached to an instance and the
number of vCPUs that the instance has.
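
For example, you could grow an existing disk with the gcloud command-line tool
and then expand the file system on the VM; the disk name, zone, new size, and
device path below are placeholder values:

    # Increase the disk to 1,000 GB (disk size can only grow, not shrink).
    gcloud compute disks resize example-data-disk \
        --size=1000GB --zone=us-central1-a

    # On the VM, grow the file system to use the new capacity (ext4 example).
    sudo resize2fs /dev/sdb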

For boot devices, you can reduce costs by using a standard
persistent disk. Small 10 GB persistent disks can work for basic boot and
package management use cases. However, to ensure consistent performance for more
general use of the boot device, use either an SSD persistent disk as your
boot disk or use a standard persistent disk that is at least 200 GB in size.
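
For instance, you might provision a larger standard boot disk when you create
the VM; the instance name, zone, machine type, and size below are placeholder
values:

    # Create an instance with a 200 GB standard persistent boot disk.
    gcloud compute instances create example-instance \
        --zone=us-central1-a \
        --machine-type=n1-standard-1 \
        --boot-disk-size=200GB \
        --boot-disk-type=pd-standard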

Each persistent disk write operation contributes to the cumulative network
egress traffic for your instance. This means that persistent disk write
operations are capped by the
network egress cap
for your instance.

Reliability

Persistent disks have built-in redundancy to protect your data against
equipment failure and to ensure data availability through datacenter
maintenance events. Checksums are calculated for all persistent disk operations
so we can ensure that what you read is what you wrote.

Additionally, you can
create snapshots of persistent disks to
protect against data loss due to user error. Snapshots are incremental, and
take only minutes to create even if you snapshot disks that are attached
to running instances.
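
As an example, you could snapshot a disk that is attached to a running instance
with the gcloud command-line tool; the disk name, snapshot name, and zone below
are placeholder values:

    # Snapshot a disk without detaching it or stopping the instance.
    gcloud compute disks snapshot example-data-disk \
        --snapshot-names=example-snapshot --zone=us-central1-a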

Persistent Disk Encryption

Compute Engine automatically encrypts your data before it travels
outside of your instance to persistent disk storage space. Each persistent disk
remains encrypted either with system-defined keys or with
customer-supplied keys.
Additionally, Google distributes persistent disk data across multiple physical
disks in a manner that users do not control.

When you delete a persistent disk, Google discards the cipher keys,
rendering the data irretrievable. This process is irreversible.

Each persistent disk can be up to 64 TB in size, so there is no need to manage
arrays of disks to create large logical volumes. Each instance can attach only
a limited amount of total persistent disk space and a limited number of
individual persistent disks.
Predefined machine types
and custom machine types
have the same persistent disk limits.

Most instances can have up to 64 TB of total persistent disk space attached.
Shared-core machine types
are limited to 3 TB of total persistent disk space. Total persistent disk space
for an instance includes the size of the boot persistent disk.

Note: If you created an instance before March 30, 2016, it might retain an
older 10 TB limit for total persistent disk space. Recreate those instances
to update their limits to the new 64 TB limit per instance.

Regional persistent disks (Standard and SSD)

Regional persistent disks have storage qualities that are similar to both
standard and SSD persistent disks. However,
regional persistent disks provide durable storage and replication of data
between two zones in the same region. If you are designing robust systems on Compute Engine, consider
using regional persistent disks to maintain high availability for resources
across multiple zones. Regional persistent disks provide synchronous replication
for workloads that might not have application-level replication.

Regional persistent disks are designed for workloads that require redundancy
across multiple zones with failover capabilities. Regional
persistent disks are also designed to work with
regional managed instance groups.
Regional persistent disks are an option for high performance databases and
enterprise applications that also require high availability.
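
As a sketch, you could create a regional SSD persistent disk that is replicated
across two zones with the gcloud command-line tool; the disk name, size, region,
and replica zones below are placeholder values:

    # Create a regional SSD persistent disk replicated across two zones.
    gcloud compute disks create example-regional-disk \
        --size=200GB --type=pd-ssd \
        --region=us-central1 \
        --replica-zones=us-central1-a,us-central1-b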

In the unlikely event of a zonal outage, you can fail over your workload running
on regional persistent disks to another zone by using the
force-attach command.
The force-attach command allows you to attach the regional persistent disk to a
standby VM instance even if the disk cannot be detached from the original VM
due to its unavailability.
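
For example, during a zonal outage you might force-attach the regional disk to a
standby VM in the other replica zone; the instance name, disk name, and zone
below are placeholder values:

    # Attach the regional disk to a standby VM even though the disk cannot be
    # cleanly detached from the original, unreachable VM.
    gcloud compute instances attach-disk standby-instance \
        --disk=example-regional-disk --disk-scope=regional \
        --zone=us-central1-b --force-attach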

A write is acknowledged back to a VM only when it is durably persisted in both
replicas. If one of the replicas is unavailable, Compute Engine
only writes to the healthy replica. When the unhealthy replica is back up
(as detected by Compute Engine), then it is transparently brought in
sync with the healthy replica and the fully synchronous mode of operation
resumes. This operation is transparent to a VM.

In the rare event both replicas become unavailable at the same time, or the
healthy replica becomes unavailable while another one is being brought into sync,
the corresponding disk becomes unavailable.

Performance

Regional persistent disks are an option when write performance is less critical
than data redundancy across multiple zones.

Like standard persistent disks, regional persistent disks can achieve greater
IOPS and throughput performance on instances with a greater number of vCPUs.
Read SSD persistent disk performance limits
for details about this and other limitations.

When you need more disk space or better performance, you can
resize your regional disks
to add more storage space, throughput, and IOPS.

Reliability

Compute Engine replicates data of your regional persistent disk to the
zones you selected when you created your disks. The data of each replica is
spread across multiple physical machines within the zone to ensure redundancy.

Similar to regular persistent disks, you can
create snapshots of persistent disks to
protect against data loss due to user error. Snapshots are incremental, and
take only minutes to create even if you snapshot disks that are attached
to running instances.

Local SSDs

Local SSDs are physically attached to the server that hosts your virtual
machine instance. Local SSDs have higher throughput and lower latency than
standard persistent disks or SSD persistent disks. The data that you store on a
local SSD persists only until the instance is stopped or deleted. Each local
SSD is 375 GB in size, but you can attach up to eight local SSD
devices for 3 TB of total local SSD storage space per instance.

Warning: The performance gains from Local SSDs require certain trade-offs
in availability, durability, and flexibility. Because of these trade-offs,
local SSD storage is not automatically replicated and
all data on the local SSD might be lost if the instance terminates for
any reason. See
Local SSD data persistence
for details.

Create an instance with local SSDs
when you need a fast scratch disk or cache and do not want to use instance
memory. Also use local SSDs when your workload itself is replicated across
multiple instances.

Local SSD performance depends heavily on which interface you select. Local SSDs
are available through both
SCSI and
NVMe
interfaces. If you choose to use NVMe, you must use a special NVMe-enabled
image to achieve the best performance. For more information, see
Choosing a disk interface type.
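
For example, you could create an instance with two NVMe local SSD devices
attached; the instance name, machine type, and zone below are placeholder
values, and you would pick an image that supports NVMe for best performance:

    # Create an instance with two local SSD devices using the NVMe interface.
    gcloud compute instances create example-instance \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --local-ssd=interface=NVME \
        --local-ssd=interface=NVME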

Local SSD Encryption

Compute Engine automatically encrypts your data when it is written to local SSD
storage space.

Data persistence on local SSDs

The data that you store on a local SSD persists only until the instance is
stopped or deleted.

Data on your local SSDs persists through
live migration
events. If Compute Engine migrates an instance with a local SSD,
Compute Engine copies data from your local SSD to the new instance
with only a short period of decreased performance.

General limitations

You can create instances with up to eight 375 GB local SSD partitions for 3 TB
of local SSD space for each instance.

Performance for local SSDs scales up until you reach a total local SSD storage
space of 1.5 TB. Beyond 1.5 TB, throughput and IOPS do not increase.

Local SSDs and machine types

You can attach local SSDs to most machine types available on Compute Engine,
unless otherwise noted,
but there are constraints around how many local SSDs you can attach based on
each machine type. For example, if you are using an N2 machine type with 2 vCPUs,
then, according to the table below, you can attach either 1, 2, 4, or 8 local SSD
devices to that VM, but cannot attach 3, 5, 6, or 7 devices.

Use the table below to understand your options for attaching local SSDs to
different machine types.

Machine type: number of allowed local SSD devices per VM instance

N1 machine types
  All N1 machine types: 1 - 8

N2 machine types
  Machine types with 2 to 10 vCPUs, inclusive: 1, 2, 4, or 8
  Machine types with 12 to 20 vCPUs, inclusive: 2, 4, or 8
  Machine types with 22 to 40 vCPUs, inclusive: 4 or 8
  Machine types with 42 to 80 vCPUs, inclusive: 8

C2 machine types
  Machine types with 4 or 8 vCPUs: 1, 2, 4, or 8
  Machine types with 16 vCPUs: 2, 4, or 8
  Machine types with 30 vCPUs: 4 or 8
  Machine types with 60 vCPUs: 8

Local SSDs and preemptible VM instances

You can start a preemptible VM instance with a
local SSD
and Compute Engine will charge you
preemptible prices
for the local SSD usage. Local SSDs attached to preemptible instances work
like normal local SSDs and will only persist for the life of the instance.
You can request a separate quota for
preemptible local SSDs but you can also choose to use your regular local SSD
quota when creating preemptible local SSDs.

Compute Engine does not charge you for local SSDs if the instance is
preempted within the first minute after it starts running.
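
As an example, you might create a preemptible instance with a single local SSD;
the instance name, machine type, and zone below are placeholder values:

    # Create a preemptible instance with one local SSD; the local SSD usage is
    # billed at preemptible rates.
    gcloud compute instances create example-preemptible \
        --zone=us-central1-a \
        --machine-type=n1-standard-4 \
        --preemptible \
        --local-ssd=interface=SCSI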

Cloud Storage buckets

Performance

The performance of Cloud Storage buckets depends on the
storage class that you select and the
location of the bucket relative to your instance.

The Standard Storage class used in the same location as your instance has
performance that is comparable to persistent disks but with higher
latency and less consistent throughput characteristics. The Standard Storage
class used in a multi-regional location stores your data redundantly across at
least two regions within a larger multi-regional location.

Nearline and Coldline Storage classes are primarily for long-term data
archival. Unlike the Standard Storage class, these
archival classes have minimum storage durations and read charges. Consequently,
they are best for long-term storage of infrequently-accessed data.

Reliability

All Cloud Storage buckets have built-in redundancy to protect your data against
equipment failure and to ensure data availability through datacenter
maintenance events. Checksums are calculated for all Cloud Storage operations
so we can ensure that what you read is what you wrote.

Flexibility

Unlike persistent disks, Cloud Storage buckets are not restricted
to the zone where your instance is located. Additionally, you can read and write
data to a bucket from multiple instances simultaneously. For example, you can
configure instances in multiple zones to read and write data in the same bucket
rather than replicate the data to persistent disks in multiple zones.
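
For example, instances in different zones could write their output to, and read
shared data from, the same bucket; the bucket name and file paths below are
placeholder values:

    # From any instance, in any zone, copy results to the shared bucket.
    gsutil cp /tmp/results.csv gs://example-bucket/results/

    # From another instance, read the same object back.
    gsutil cp gs://example-bucket/results/results.csv /tmp/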

Furthermore, you can
mount a Cloud Storage bucket
to your instance as a file system. Mounted buckets function similarly to a
persistent disk when you read or write files. However, Cloud Storage buckets
are object stores that do not have the same write constraints as a POSIX
file system and cannot be used as boot disks. If multiple instances write to
the same object at the same time, one instance can overwrite critical data
that another instance has written.
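
As a sketch, you could mount a bucket on an instance with Cloud Storage FUSE
(gcsfuse), assuming the tool is already installed; the bucket name and mount
point below are placeholder values:

    # Create a mount point and mount the bucket as a file system.
    mkdir -p /mnt/example-bucket
    gcsfuse example-bucket /mnt/example-bucket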

Cloud Storage Encryption

Compute Engine automatically encrypts your data before it travels
outside of your instance to Cloud Storage buckets. You do not need to
encrypt files on your instances before you write them to a bucket.