Picking the right Deep Learning Hardware

After some initial experiments building on textbook examples, I figured I need a dedicated server. My desktop is OK (an i7-6700k with a GTX 980), but I want to use it for other purposes as well, like gaming (Dota 2), and don’t want to be restricted in when I can have it running. Also, Windows is not ideal for TensorFlow, so I will buy a dedicated Linux server with a good GPU and put it in my basement, where it can run day and night training my experiments.

The question is: what to buy to get good value for money? It is still just a hobby in its infancy, so it shouldn’t be a huge investment. On the other hand, I can use the machine for multiple purposes, like NAS, ownCloud, or as a VPN server. So I searched the web and the archive of my favorite computer magazine, c’t. The magazine had just tested the new RTX 2070 and found it OK, but not superior to the GTX 1080 Ti, which you can get for the same price. However, Tim Dettmers’s webpage taught me that the RTX cards can train neural networks in 16-bit instead of 32-bit precision and thereby effectively double the usable memory. A strong argument, as I had already run into memory shortages with my first finger exercises on my GTX. Tim’s conclusion, and now mine: “Currently, my main recommendation is to get an RTX 2070 GPU and use 16-bit training.”
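To put the 16-bit argument in numbers, here is a rough back-of-envelope sketch. The parameter count is a made-up example, not my actual network, and real training also stores gradients, optimizer state, and activations, so treat this as an illustration of the halving effect only:

```python
def tensor_gib(num_values, bytes_per_value):
    """Memory needed to hold num_values of a given dtype on the GPU, in GiB."""
    return num_values * bytes_per_value / 1024**3

# A hypothetical network keeping 500 million values in GPU memory:
n = 500_000_000
fp32 = tensor_gib(n, 4)  # float32: 4 bytes per value
fp16 = tensor_gib(n, 2)  # float16: 2 bytes per value
print(f"float32: {fp32:.2f} GiB, float16: {fp16:.2f} GiB")
```

Halving the bytes per value exactly halves the footprint, which is why the same 8 GB card effectively behaves like a 16 GB one for storage-bound workloads.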

On top of that, I decided to get an i5-9600k, which according to some benchmarks is 10–30% faster than my three-year-old i7-6700k. The rest is pretty standard: a Samsung 970 EVO 500 GB SSD and 32 GB of DDR4-3000 on an MSI Z370 Tomahawk mainboard. Only the 800 W power supply stands out, with its reserve for further GPUs 🙂