
General Purpose Instances

General purpose instances provide a balance of compute, memory, and networking resources,
and can be used for a variety of workloads.

M5, M5a, M5ad, and M5d Instances

These instances provide an ideal cloud infrastructure, offering a balance of compute,
memory, and networking resources for a broad range of applications that are deployed
in the cloud. M5 instances are well suited for workloads such as web and application
servers, small and midsize databases, gaming servers, caching fleets, and backend
servers for enterprise applications.

m5.metal and m5d.metal instances provide your applications
with direct access to physical resources of the host server, such as processors and
memory. These instances are well suited for the following:

Workloads that require access to low-level hardware features (for example, Intel VT)
that are not available or fully supported in virtualized environments

Applications that require a non-virtualized environment for licensing or support

T2, T3, and T3a Instances

These instances provide a baseline level of CPU performance with the ability to burst
to a higher level when required by your workload. An Unlimited instance can sustain
high CPU performance for as long as required. For more information, see Burstable
Performance Instances. These instances are well suited for workloads such as
microservices, low-latency interactive applications, small and medium databases,
virtual desktops, and development environments.

Instance Performance

EBS-optimized instances enable you to get consistently high performance for your EBS
volumes by eliminating contention between Amazon EBS I/O and other network traffic
from your
instance. Some general purpose instances are EBS-optimized by default at no additional
cost.
For more information, see Amazon EBS–Optimized Instances.

Instance types that use the Elastic Network Adapter (ENA) for enhanced networking
deliver high packet-per-second performance with consistently low latencies. Most
applications do not consistently need a high level of network performance, but can
benefit from having access to increased bandwidth when they send or receive data.
Instance sizes that use the ENA and are documented with network performance of
"Up to 10 Gbps" or "Up to 25 Gbps" use a network I/O credit mechanism to allocate
network bandwidth to instances based on average bandwidth utilization. These
instances accrue credits when their network bandwidth is below their baseline
limits, and can use these credits when they perform network data transfers.
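The accrue-and-spend behavior described above can be modeled as a simple token
bucket. The class below is an illustrative sketch only; the rates, credit cap, and
one-second accounting interval are assumptions made for the example, not the actual
EC2 accounting.

```python
# Illustrative token-bucket model of a network I/O credit mechanism.
# All names and parameters are hypothetical, chosen for the sketch.

class NetworkCreditBucket:
    def __init__(self, baseline_gbps, burst_gbps, max_credits):
        self.baseline = baseline_gbps    # guaranteed baseline bandwidth (Gbps)
        self.burst = burst_gbps          # "Up to" bandwidth ceiling (Gbps)
        self.max_credits = max_credits   # cap on accrued credits (Gbit)
        self.credits = 0.0

    def step(self, demand_gbps, seconds=1.0):
        """Return the bandwidth granted for one accounting interval."""
        if demand_gbps <= self.baseline:
            # Below baseline: unused headroom accrues as credits.
            self.credits = min(
                self.max_credits,
                self.credits + (self.baseline - demand_gbps) * seconds,
            )
            return demand_gbps
        # Above baseline: spend credits to burst, up to the ceiling.
        extra = min(demand_gbps, self.burst) - self.baseline
        spend = min(extra * seconds, self.credits)
        self.credits -= spend
        return self.baseline + spend / seconds
```

With a 5 Gbps baseline and a 10 Gbps ceiling, an idle interval banks 5 Gbit of
credits, which then funds one interval of full-rate bursting; once the credits are
exhausted, the grant falls back to the baseline.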

The following is a summary of network performance for general purpose instances that
support enhanced networking.

SSD I/O Performance

If you use all the SSD-based
instance store volumes available to your instance, you get the IOPS (4,096 byte block
size) performance listed in the following table (at queue depth saturation). Otherwise,
you get lower IOPS performance.

Instance Size       100% Random Read IOPS    Write IOPS

m5ad.large *                       30,000        15,000
m5ad.xlarge *                      59,000        29,000
m5ad.2xlarge *                    117,000        57,000
m5ad.4xlarge *                    234,000       114,000
m5ad.12xlarge                     700,000       340,000
m5ad.24xlarge                   1,400,000       680,000
m5d.large *                        30,000        15,000
m5d.xlarge *                       59,000        29,000
m5d.2xlarge *                     117,000        57,000
m5d.4xlarge *                     234,000       114,000
m5d.8xlarge                       466,666       233,333
m5d.12xlarge                      700,000       340,000
m5d.16xlarge                      933,333       466,666
m5d.24xlarge                    1,400,000       680,000
m5d.metal                       1,400,000       680,000

* For these instances, you can get up to the specified performance.

As you fill the SSD-based instance store volumes for your instance, the number of
write IOPS that you can achieve decreases. This is due to the extra work the SSD
controller must do to find available space, rewrite existing data, and erase unused
space so that it can be rewritten. This process of garbage collection results in
internal write amplification to the SSD, expressed as the ratio of SSD write
operations to user write operations. This decrease in performance is even larger if
the write operations are not in multiples of 4,096 bytes or not aligned to a
4,096-byte boundary. If you write a smaller number of bytes or bytes that are not
aligned, the SSD controller must read the surrounding data and store the result in a
new location. This pattern results in significantly increased write amplification,
increased latency, and dramatically reduced I/O performance.
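One way to avoid the read-modify-write penalty described above is to keep write
sizes and file offsets on 4,096-byte boundaries. The helpers below are a minimal
sketch; the function names are hypothetical, not part of any library.

```python
# Hypothetical helpers for keeping writes on 4,096-byte boundaries,
# avoiding the unaligned-write penalty described above.

BLOCK_SIZE = 4096

def pad_to_block(data: bytes, fill: bytes = b"\x00") -> bytes:
    """Return data padded up to the next 4,096-byte boundary."""
    remainder = len(data) % BLOCK_SIZE
    if remainder == 0:
        return data
    return data + fill * (BLOCK_SIZE - remainder)

def is_aligned(offset: int) -> bool:
    """True if a byte offset falls on a 4,096-byte boundary."""
    return offset % BLOCK_SIZE == 0
```

For example, a 5,000-byte buffer pads out to 8,192 bytes (two blocks), so the SSD
controller can write whole blocks instead of merging partial ones.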

SSD controllers can use several strategies to reduce the impact of write amplification.
One such strategy is to reserve space in the SSD instance storage so that the controller
can more efficiently manage the space available for write operations. This is called
over-provisioning. The SSD-based instance store volumes provided to
an instance don't have any space reserved for over-provisioning. To reduce write
amplification, we recommend that you leave 10% of the volume unpartitioned so that
the SSD
controller can use it for over-provisioning. This decreases the storage that you can
use,
but increases performance even if the disk is close to full capacity.
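The 10% recommendation translates into simple sizing arithmetic. The helper below is
an illustrative sketch, not an AWS tool; the function name and default are
assumptions for the example.

```python
# Illustrative sizing arithmetic for the 10% over-provisioning
# recommendation: leave a fraction of the raw volume unpartitioned.

def overprovision_split(volume_bytes: int, reserve_fraction: float = 0.10):
    """Return (usable_bytes, reserved_bytes) for a given reservation."""
    reserved = int(volume_bytes * reserve_fraction)
    return volume_bytes - reserved, reserved
```

On a 100 GB volume, this leaves 10 GB unpartitioned for the SSD controller and
90 GB usable for data.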

For instance store volumes that support TRIM, you can use the TRIM command to notify
the SSD controller whenever you no longer need data that you've written. This provides
the
controller with more free space, which can reduce write amplification and increase
performance.
For more information, see Instance Store Volume TRIM Support.

Release Notes

M4, M5, M5a, M5ad, M5d, t2.large and larger, t3.large
and larger, and t3a.large and larger instance types require
64-bit HVM AMIs. They have large amounts of memory and require a 64-bit operating
system to take advantage of that capacity. HVM AMIs provide superior performance in
comparison to paravirtual (PV) AMIs on high-memory instance types. In addition,
you must use an HVM AMI to take advantage of enhanced networking.

M5, M5a, M5ad, M5d, T3, and T3a instances have the following requirements:

M5, M5a, M5ad, M5d, T3, and T3a
instances support a maximum of 28 attachments, including network interfaces, EBS volumes,
and NVMe instance store volumes. Every instance has at least one network interface
attachment. For example, if you have no additional network interface attachments
on an EBS-only instance, you could attach 27 EBS volumes to that instance.
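The shared attachment budget in the paragraph above can be expressed as a small
helper. This is an illustrative sketch, not an AWS API; the function name and
defaults are assumptions.

```python
# Sketch of the shared 28-attachment budget described above:
# network interfaces, EBS volumes, and NVMe instance store volumes
# all draw from the same limit.

MAX_ATTACHMENTS = 28

def max_ebs_volumes(network_interfaces: int = 1,
                    nvme_instance_store: int = 0) -> int:
    """EBS volumes that still fit under the shared attachment limit."""
    used = network_interfaces + nvme_instance_store
    return max(0, MAX_ATTACHMENTS - used)
```

This reproduces the example in the text: with one network interface and no NVMe
instance store volumes, 27 EBS volume attachments remain.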

Launching a bare metal instance boots the underlying server, which includes verifying
all
hardware and firmware components. This means that it can take 20 minutes from the
time the instance
enters the running state until it becomes available over the network.

Bare metal instances use a PCI-based serial device rather than an I/O port-based serial
device.
The upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare
metal instances
also provide an ACPI SPCR table to enable the system to automatically use the PCI-based
serial device. The
latest Windows AMIs automatically use the PCI-based serial device.