CPU vs GPU in Oracle Cloud

If you read my blog post “Optimizing TensorFlow for CPU”, you learned that you can improve TensorFlow's CPU performance simply by choosing the right distribution, in that case the Anaconda distribution.

CPU instances can handle simple AI projects, but if you need more computing power to reduce the execution or training time of your project, you need GPU instances.

Since many people have asked me to run the same test on GPU instances, this post shows the results of that test!

For this test, I used Oracle Cloud Infrastructure instead of the Oracle Cloud Infrastructure Classic used in the previous blog post. Oracle Cloud Infrastructure is the new generation of Oracle's IaaS, so it's the right choice to use today.

These are the instances:

CPU instance (VM.Standard2.2): 2 OCPUs + 30 GB memory

GPU instance (VM.GPU3.1): 1 V100 GPU + 90 GB memory

In the CPU instance I created the Anaconda environment using:

conda create -n py36tf tensorflow python=3.6

In the GPU instance I created the Anaconda environment using:

conda create -n py36tf tensorflow-gpu python=3.6

To compare the performance of the CPU and GPU instances, I used TensorFlow to run the following matrix multiplication script:
