All the major cloud vendors are betting that artificial intelligence services will become a crucial part of cloud computing over the next few years, and Amazon Web Services has become the first cloud vendor to launch services based on Nvidia's newest AI chips.

The new EC2 instances unveiled Wednesday night add Nvidia's Tesla V100 graphics processing units to AWS's flagship compute service. The instances will be available in four regions, and users can run their workloads on one, four, or eight Tesla V100 chips, depending on their needs and their budget.
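The three sizes correspond to distinct instance types in AWS's P3 lineup. As a minimal sketch, assuming the published p3.2xlarge / p3.8xlarge / p3.16xlarge naming (the helper function here is illustrative, not part of any AWS SDK), choosing a size might look like:

```python
# Sketch: pick a P3 instance type by GPU count. The type names below match
# AWS's published P3 lineup at launch; treat them as illustrative, and the
# helper as hypothetical, not an AWS API.
P3_INSTANCE_TYPES = {
    1: "p3.2xlarge",   # 1x Tesla V100
    4: "p3.8xlarge",   # 4x Tesla V100
    8: "p3.16xlarge",  # 8x Tesla V100
}

def pick_p3_instance(gpus_needed: int) -> str:
    """Return the smallest P3 instance type with at least `gpus_needed` GPUs."""
    for count in sorted(P3_INSTANCE_TYPES):
        if count >= gpus_needed:
            return P3_INSTANCE_TYPES[count]
    raise ValueError(f"No single P3 instance offers {gpus_needed} GPUs")

print(pick_p3_instance(2))  # prints "p3.8xlarge"
```

The resulting type name is what a user would pass to the EC2 launch APIs or console when starting a GPU workload.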

Researchers working on the complicated math at the forefront of artificial intelligence have been gobbling up GPUs, whose specialized architecture differs from the general-purpose processors that power most cloud servers. GPUs can't handle every workload, but they excel at highly parallel tasks like training AI models.

Nvidia is reaping the benefits of that shift, quickly becoming the go-to chip supplier for AI researchers and a must-have for any cloud service provider. Microsoft and Google will likely follow suit and offer Nvidia's V100 chips in due course; Google has also been working on its own AI-oriented chip design, the Cloud TPU.

AWS said the new P3 instances will deliver up to 14 times the processing power of the current P2 instances. That kind of speedup matters to researchers training AI models on huge data sets, jobs that can take days to complete.
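To put the claimed 14x figure in perspective, here is a back-of-the-envelope calculation; the week-long baseline is a hypothetical figure chosen for illustration, not an AWS benchmark:

```python
# Illustrative arithmetic only: the 7-day baseline is hypothetical,
# and 14x is AWS's claimed P3-vs-P2 improvement.
baseline_hours = 7 * 24          # a week-long training run on P2
speedup = 14                     # claimed P3-vs-P2 improvement
p3_hours = baseline_hours / speedup
print(f"{p3_hours:.0f} hours")   # prints "12 hours"
```

By that rough math, a training job that once monopolized a cluster for a week could finish in about half a day.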