Features

Computing capabilities

Powerful computing capabilities across a range of GPU types

GA1 instance

A GA1 instance can provide a maximum of four AMD FirePro S7150 GPUs, 56 vCPUs, and 160 GB of memory. It has 32 GB of GPU memory and 8,192 cores that work in parallel, and delivers up to 15 TFLOPS of single-precision and 1 TFLOPS of double-precision floating-point performance.

GN4 instance

A GN4 instance can provide a maximum of two NVIDIA Tesla M40 GPUs, 56 vCPUs, and 96 GB of memory. It has 24 GB of GPU memory and 6000 cores that work in parallel, and delivers up to 14 TFLOPS of single-precision floating-point performance.

GN5 instance

A GN5 instance can provide a maximum of eight NVIDIA Tesla P100 GPUs, 56 vCPUs, 480 GB of memory, and 128 GB of GPU memory. It delivers up to 74.4 TFLOPS of single-precision floating-point performance. This helps achieve large-scale parallel floating-point computation performance required in deep learning and other general-purpose GPU computation scenarios. A GN5 instance also provides up to 37.6 TFLOPS of double-precision floating-point performance to deliver high computing performance required in scenarios such as scientific computing.

GN5i instance

A GN5i instance can provide a maximum of two NVIDIA Tesla P4 GPUs, 56 vCPUs, and 224 GB of memory. It has 16 GB of GPU memory and delivers up to 11 TFLOPS of single-precision floating-point performance and 44 TOPS of INT8 computing capability.

GN6 instance

A GN6 instance can provide a maximum of eight NVIDIA Tesla V100 GPUs, 88 vCPUs, and 256 GB of memory. It has 128 GB of GPU memory. Using Tensor Cores, a GN6 instance can provide up to 1,000 TFLOPS of deep learning computing capability and 125.6 TFLOPS of single-precision floating-point performance. This helps achieve the large-scale parallel floating-point computation performance required in general-purpose GPU computation scenarios. A GN6 instance also provides up to 62.4 TFLOPS of double-precision floating-point performance to deliver the high computing performance required in scenarios such as scientific computing.
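Because each figure above is quoted for the instance type's maximum GPU configuration, the implied per-GPU peak can be recovered by simple division, assuming aggregate performance scales linearly with GPU count:

```python
# Per-GPU peak single-precision throughput implied by the instance
# maximums quoted above (aggregate TFLOPS / maximum GPU count).
instance_peaks = {
    # name: (max GPUs, aggregate single-precision TFLOPS)
    "GA1":  (4, 15.0),    # AMD FirePro S7150
    "GN4":  (2, 14.0),    # NVIDIA Tesla M40
    "GN5":  (8, 74.4),    # NVIDIA Tesla P100
    "GN5i": (2, 11.0),    # NVIDIA Tesla P4
    "GN6":  (8, 125.6),   # NVIDIA Tesla V100
}

per_gpu = {name: tflops / gpus for name, (gpus, tflops) in instance_peaks.items()}
for name, value in sorted(per_gpu.items()):
    print(f"{name}: {value:.2f} TFLOPS per GPU")
```

For example, the GN5 figure works out to 74.4 / 8 = 9.3 TFLOPS per P100, consistent with the single-GPU GN5 configurations.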

Extraordinary general network performance

The excellent network performance delivered by EGS maximizes computing and rendering performance for a wide range of complex computational scenarios.

Elastic GPU instances have a high-speed local cache and can have ultra cloud disks or SSD cloud disks attached. This ensures high data availability and maximizes computation and rendering performance.

Multiple payment methods

You can choose the payment method that best suits your needs.

Pay yearly

Pay for instance use on a yearly basis to receive the largest discount.

Pay monthly

Pay for instance use on a monthly basis to keep each payment reasonable while still enjoying a relatively low hourly price.

Pay hourly

Pay for instance use on an hourly basis to meet temporary needs for computing resources at the lowest possible cost.

Highly reliable cloud storage based on three-copy redundancy can be attached to GA1 instances. Additionally, local NVMe drives with up to 1.4 TB of capacity can run on GA1 instances. These NVMe drives can handle 230,000 IOPS with an I/O latency of about 200 μs, and provide up to 1,900 Mbit/s of read bandwidth and 1,100 Mbit/s of write bandwidth. (Instance performance was measured with 240,000 random reads at an I/O depth of 12.)
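As a sanity check on those throughput and latency figures, Little's law (in-flight I/Os = IOPS × latency) estimates how much outstanding I/O is needed to sustain the quoted numbers; the calculation below uses the 230,000 IOPS and ~200 μs quoted above:

```python
# Little's law: average concurrency = throughput x latency.
iops = 230_000       # quoted random-read IOPS
latency_s = 200e-6   # quoted I/O latency, ~200 microseconds

in_flight = iops * latency_s
print(f"~{in_flight:.0f} I/Os must be in flight on average")
```

At the quoted per-job queue depth of 12, roughly 46 outstanding I/Os correspond to about four concurrent benchmark jobs, assuming the test was run with multiple parallel workers.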

Common Scenarios

Online rendering in the cloud (GA1)

General-purpose GPU computation (GN4)

Outstanding computation acceleration (GN5)

Deep learning inference capabilities (GN5i)

Online rendering in the cloud (GA1)

Online rendering using Cloud Desktop

You can quickly access a GA1 instance through Cloud Desktop for a rich visual and interactive rendering experience. You can also use the Remote Desktop Protocol (RDP) for real-time online rendering and graphics editing. With RDP, you can access a GA1 instance from anywhere and perform rendering and graphics-editing work from many types of devices. Data is stored in Network Attached Storage (NAS) or Alibaba Cloud Object Storage Service (OSS), and you can pull it from your internal network at any time, which keeps it secure. In workplaces, Express Connect and NAT Gateway can be used to improve the network experience and reduce costs.

Integrations and Configurations (GN4)

A GN4 instance is based on NVIDIA's Maxwell-architecture Tesla M40 GPU and provides up to 14 TFLOPS of single-precision floating-point performance. This helps achieve the large-scale parallel floating-point computation performance required in deep learning and other general-purpose GPU computation scenarios. GN4 instances can be seamlessly integrated into an elastic computing ecosystem to provide solutions that are ideal for either online or offline computation scenarios. Additionally, integrating Container Service into your workflow can help simplify deployment and O&M, and provide resource scheduling services.

Integrations and Configurations (GN5)

A GN5 instance is based on the NVIDIA Tesla P100 GPU and provides up to 74.4 TFLOPS of single-precision floating-point performance. This helps achieve the large-scale parallel floating-point computation performance required in deep learning and other general-purpose GPU computation scenarios. A GN5 instance also provides up to 37.6 TFLOPS of double-precision floating-point performance to deliver the high computing performance required in scenarios such as scientific computing. GN5 instances support GPUDirect P2P technology, which allows GPUs to communicate directly with each other over the PCIe bus, greatly reducing inter-GPU communication latency. GN5 instances can be seamlessly integrated into an elastic computing ecosystem to provide solutions that are ideal for either online or offline computation scenarios.
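As an illustration (not an official example), peer access between GPUs on a multi-GPU GN5 instance could be probed with CuPy; `gpu_pairs` and `probe_peer_access` are hypothetical helper names, and the probe itself assumes the NVIDIA driver and CuPy are installed on the instance:

```python
# Sketch: enumerate ordered GPU pairs and probe GPUDirect P2P reachability.
import itertools

def gpu_pairs(num_gpus):
    """All ordered (src, dst) pairs of distinct GPUs to probe."""
    return [(a, b) for a, b in itertools.permutations(range(num_gpus), 2)]

def probe_peer_access(num_gpus):
    """Return {(src, dst): bool} indicating whether src can directly
    access dst's memory over PCIe. Requires a GPU instance with CuPy."""
    import cupy as cp  # only available on a GPU-equipped instance
    return {
        (a, b): bool(cp.cuda.runtime.deviceCanAccessPeer(a, b))
        for a, b in gpu_pairs(num_gpus)
    }

# On an 8-GPU GN5 instance there are 56 ordered pairs to check:
print(len(gpu_pairs(8)))
```

Pairs for which the probe returns `True` can communicate GPU-to-GPU without staging data through host memory, which is the source of the latency reduction described above.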

Additionally, making full use of Container Service can help simplify deployment and O&M, and provide resource scheduling services. The Image Market provides a GN5 instance image that is equipped with an NVIDIA GPU driver and a deep learning framework, which simplifies deployment.

Integrations and Configurations (GN5i)

A GN5i instance is based on the NVIDIA Tesla P4 GPU and provides up to 11 TFLOPS of single-precision floating-point performance and 44 TOPS of INT8 computing capability, which is ideal for deep learning scenarios, especially inference. Additionally, a single GPU consumes only 75 W of power while maintaining high performance output. GN5i instances can be seamlessly integrated into an elastic computing ecosystem to provide solutions that are ideal for either online or offline computation scenarios. Additionally, making full use of Container Service can help simplify deployment and O&M, and provide resource scheduling services. The Image Market provides a GN5i instance image that is equipped with an NVIDIA GPU driver and a deep learning framework, which simplifies deployment.