Now that I am folding regularly, I want to take it to the next level. Flat-out warp speed folding records for a single machine.

Roughly in descending order of priority, here's what I think I want to do:

1. Take advantage of BIGADV work, if they are still available. I believe the requirement is 16+ cores.

2. Take advantage of x17 core GPU work. To support multiple slots, I will need as many economically available modern graphic cards as possible; one or two at first, up to four total in the future.

3. Possibly use this box as a backroom animation server. More cores, more cores!

4. I want to be able to run Windows on this box, so that I can support rendering software, a SQL Server database, or even my iTunes and media library from a central location.

5. As far as I'm concerned, the folding can run in a Linux virtual machine, as long as that will not disqualify this system for BIGADV or x17 work. I could also run Windows as a virtual machine, but only let it see a maximum of two physical processors. What is feasible and/or doable on a software budget of $300-500?

6. I would like the new box to have 802.11 N or AC, and would like it to run without a monitor so that I can plug it in anywhere close to an Aircon vent; even the laundry room.

7. I need help putting some of this into perspective so that I can arrive at a reasonable hardware budget. At this time, I think I'm willing to spend a couple thousand just for the hardware.

8. Loose Hardware Specs:

---- 2 or 4 processor motherboard that can support at least four PCIe graphics cards and 24 GB RAM
---- Start with 2 CPUs, add more later. Would prefer to get to 16+ cores with the initial build.
---- Not sure, but I think 2.4 GHz would be the slowest clock I'd want for the folding work.
---- I do not want to overclock either the system or the GPU. Stock speeds only, because I want to run this server for several years.
---- Start with 16 GB RAM, add more later.
---- Start with 2 graphics cards, HD 78xx or 79xx or Nvidia equivalent, add more later.
---- SSD for boot and application partitions, more SSD to be added later to support databases.
---- At this time, not married to AMD vs. Intel (processor) or AMD vs. Nvidia (GPU), as long as it's not a fussy system and is reasonably power-frugal (given the amount of work it's going to be doing, of course).
---- Power supply that can handle the number of processor slots and GPUs that I have listed above.

So let's talk. I'd like to begin ordering these parts in September, so that I can assemble and stand up this system in time for our annual Frankenbot dance party. After that, I'll be on the hunt for Fuzzhead!

There are a few requirements there that will make the costs go up really fast. If you are really shooting for an all-in-one multipurpose monster, these are my thoughts.

BIF wrote:1. Take advantage of BIGADV work, if they are still available. I believe the requirement is 16+ cores.

This means 2P/4P Opterons or Xeons. For the Xeons, the jump to 2P and 4P is fairly large, and then FB-DIMMs are not the cheapest thing in the world. For cost (and other component choices), you can pick a 2P system, but there is no telling whether future BIGADV will up the core count yet again.

BIF wrote:2. Take advantage of x17 core GPU work. To support multiple slots, I will need as many economically available modern graphic cards as possible; one or two at first, up to four total in the future.

Server boards may not have that many PCIe x16 slots. I would set a more realistic expectation of 2. Do you know if the x17 GPU core can take advantage of DP instructions? If not, then you don't have to go for the Teslas, Titans will do.

BIF wrote:3. Possibly use this box as a backroom animation server. More cores, more cores!

What is the software involved, and which OS does it require? This dictates whether you will be using Linux or Windows as the host OS, or whether you need to put things in a VM. That said, an animation server should probably benefit more from more cores than from higher-IPC cores, meaning you can go AMD without worrying too much about falling behind the performance curve compared to the Xeons.

BIF wrote:4. I want to be able to run Windows on this box, so that I can support rendering software, a SQL Server database, or even my iTunes and media library from a central location.

5. As far as I'm concerned, the folding can run in a Linux virtual machine, as long as that will not disqualify this system for BIGADV or x17 work. I could also run Windows as a virtual machine, but only let it see a maximum of two physical processors. What is feasible and/or doable on a software budget of $300-500?

Media library = storage. Server/workstation boards should give you plenty of ports and even RAID options. Not sure about iTunes, especially if you want to run this headless. If your primary rendering software on Windows can take advantage of the cores, then why limit it to a VM with fewer cores?

If running headless and connecting via Remote Desktop, I am not sure whether the GPU processing will be interrupted.

BIF wrote:6. I would like the new box to have 802.11 N or AC, and would like it to run without a monitor so that I can plug it in anywhere close to an Aircon vent; even the laundry room.

A relatively cheap PCIe x1 card can handle it, or just get a USB one.

BIF wrote:7. I need help putting some of this into perspective so that I can arrive at a reasonable hardware budget. At this time, I think I'm willing to spend a couple thousand just for the hardware.

It can bust $2k easily. Maybe you do need to make some compromises.

BIF wrote:8. Loose Hardware Specs:

---- 2 or 4 processor motherboard that can support at least four PCIe graphics cards and 24 GB RAM
---- Start with 2 CPUs, add more later. Would prefer to get to 16+ cores with the initial build.

For 16 cores you can get away with 2x 8-core Opterons or 4C8T (maybe even 6C12T) Xeons. Going to 4P you have the following issues:
- cost
- you need larger than E-ATX boards
- you pretty much need to get a rackmountable chassis instead of a regular case; selection will be much more limited (and they may not look nice), plus more cost

BIF wrote:---- I do not want to overclock either the system or the GPU. Stock speeds only because I want to run this server for several years.

You pretty much cannot do it with server boards.

BIF wrote:---- Start with 16 GB RAM, add more later.

BIGADV is supposed to use a lot of RAM as well; add that to rendering and SQL Server (how big are your databases?) and I would say you should shoot for 32GiB. More costs.
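To sanity-check that 32GiB figure, here is a rough tally of the workloads mentioned in this thread; every number below is an assumed ballpark for illustration, not a measured requirement:

```python
# Back-of-envelope RAM budget; all figures are guesses, not
# measured requirements for any specific software.
ram_needs_gib = {
    "bigadv_folding": 8,   # large WUs are known to be RAM-hungry
    "rendering": 8,        # depends heavily on scene size
    "sql_server": 4,       # small learning databases
    "os_and_misc": 4,      # host OS, media serving, overhead
}
total = sum(ram_needs_gib.values())
print(total)  # 24 -> rounding up to 32 GiB leaves headroom
```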

If the x17 cores can take advantage of the DP instructions in Teslas, then just get a Titan or two. I am not sure if you can find server boards with properly spaced-out PCIe x16 slots for all double-width cards.

BIF wrote:---- Power supply that can handle the number of processor slots and GPUs that I have listed above.

For 2P systems, I would say 850W is the minimum. If going to 4 video cards, then maybe those 1kW+ PSUs make sense. However, if you want 4P and a rackmount server chassis, mainstream ATX/EATX power supplies won't apply, and I don't know much about those power supplies.

Basically, you are into server territory, not even sure if it makes sense to build your own.

The Model M is not for the faint of heart. You either like them or hate them.

Fox, thank you so much for your reply. You've given me a lot to consider. I need to do some homework based on what you told me. In the meantime:

1. The database won't be big (home stuff mostly for learning, but I want to be able to access it from other computers/devices, to learn this part of it).

2. The animation stuff will run under Windows.

3. I was thinking of hosting Windows and Linux both under VM (which VM is TBD). Linux because I was under the impression that BIGADV pretty much required Linux, but also to allow me to distribute CPU resources to the two VMs.

4. I will look into Titan; thanks for the tip.

5. iTunes - I want to run iTunes Server, which I think does not need a monitor; will have to explore this.

6. Yes, I plan to remote into it for control...

7. ...but..."headless" is not really a hard requirement...especially if I decide to put it in the kitchen. Then I'll probably eventually put a rubber keyboard, old monitor, and cheap mouse on it so that it could also serve as a local machine for simple web searches when I'm playing "Iron Chef" or if my hands/feet are dirty from working in the garage, etc.

I have found the summer months to be the best time to buy hardware as the demand goes down worldwide because normal people (not troglodytes like us) are spending their money on things like holidays. I have found it to be the best time to look for bargains over the years.

BIF wrote:1. The database won't be big (home stuff mostly for learning, but I want to be able to access it from other computers/devices, to learn this part of it).

Remote DB access is just basic networking. We can almost take it for granted.

BIF wrote:2. The animation stuff will run under Windows.

Can it suck up all the cores that you can throw at it? If this animation stuff is that important, then I think the simplest thing is to stick with Windows as the OS (if you are looking beyond 2P then you will need Windows Server). A quick search did not come up with any recent comparisons of bigadv-16 Windows vs Linux (both are supported, I believe). Let's say the Linux version is better: do you want to sacrifice the animation stuff by relegating it to a VM with fewer cores and less RAM? Depends on your animation software, of course.

BIF wrote:3. I was thinking of hosting Windows and Linux both under VM (which VM is TBD). Linux because I was under the impression that BIGADV pretty much required Linux, but also to allow me to distribute CPU resources to the two VMs.

I would trust the OS to handle that for you, that's why I suggest running Windows only for simplicity.

That pretty much means if you are doing 4P, you can forget GPU folding. The thread also mentioned the Xigmatek Elysium and others for E-ATX. With 4P boards you will be looking at rackmountable chassis.

Nec_V20 wrote:I have found the summer months to be the best time to buy hardware as the demand goes down worldwide because normal people (not troglodytes like us) are spending their money on things like holidays. I have found it to be the best time to look for bargains over the years.

It may be so in Europe, but in the US the best deals are usually around Thanksgiving/Christmas. Although I must say, with stores like Newegg/TigerDirect/Micro Center/Amazon, good deals that are close enough can be had all year round if you are patient and check often. However, server-grade hardware is not discounted as heavily or as often, so this may be a moot point.


From my experience, what you're looking to do is going to require quite a bit of beef, but it can be done with a dual-socket system. It should run you about $10K from a vendor, depending on how much storage you want and how many (more than 1) GPUs you want. I built a wishlist on Newegg for a 24-core 2P system based on Opteron 6344s and this ASUS SSI-EEB motherboard. 64GiB of RAM, a couple of 256GiB Samsung SSDs, lots of storage, and a 7970: that is about $4k. You could swap the 6344s for 6376s and get 32 cores, though that would bump the cost to $5k+. Swap the 7970 for a Titan or a K20C or K10C (which have active cooling) and you're talking more like $6k or $7k.

Edit: I should say that I suggest a beefy system like this because it will allow you to do something fun: run a hypervisor on the bare metal, and then run Windows and Linux both on top of it. This will give you the best of all worlds. However, moar costs.

To all others, responses forthcoming. But a glass of wine, some thunderstorm watching, and then dinner are first on my list this nice wet Friday, because it's cleansing and it's cooling everything off.

Oops, there's $2000 already and I haven't even picked out a case and a GPU.....

You know, the funny thing is that AMD is not that much cheaper, but you do get "real" cores. So the PPD race may be a wash (higher IPC with HT cores vs lower IPC with real cores). :O

2P motherboard - $430 (pretty much you save money here)
2x Opteron 6344 2.6GHz, 115W TDP each - $820
4x 8GB ECC memory - $310 (too lazy to look for 1600 RAM, don't think it makes a difference here)
Seasonic 1050W PSU - $222 (115Wx2 + 265Wx2 = 760W, so strictly speaking you can get away with a 900W PSU, but you may be overclocking the Titans or want to leave some room on the efficiency curve)
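The PSU arithmetic in that parts list can be sketched out like this; the 100 W allowance for drives/fans/board and the 25% headroom factor are my assumptions, not figures from the thread:

```python
# PSU sizing from the parts list above: 2 CPUs at 115 W TDP,
# 2 GPUs at 265 W each, plus assumed extras and headroom.
cpu_tdp_w, n_cpus = 115, 2
gpu_tdp_w, n_gpus = 265, 2
base_load_w = cpu_tdp_w * n_cpus + gpu_tdp_w * n_gpus
other_w = 100        # drives, fans, motherboard (assumed)
headroom = 1.25      # stay well under the PSU's rated output
recommended_w = (base_load_w + other_w) * headroom
print(base_load_w, recommended_w)  # 760 1075.0
```

With those assumptions a 1050W unit lands right in the comfortable zone.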

A $250-300 7950 can fold 50-80k PPD with the 0x17 core on the v7 Folding@home client. This uses less than 1% CPU most of the time, but the v7 software dedicates 1 core to it. If you get two 7950s, you will need to beef up your power supply. My i5-3570K @ 4.4GHz can only generate 10-15k PPD operating with 3 cores, so the GPU dominates.

I've never done anything like this before: considered buying an $800-1,000 hardware component solely for the purpose of "service donation" to a charitable or research organization (or building/maintaining a machine that will spend 99% of its time working for that purpose). But I've thought about this and I'm okay with it. Maybe I'll give it a "five-year mission" or some other goal, including periodic drydock refits. By the way, I've taken to calling this new machine my "folding server," even though that is a bit of a misnomer.

I just need to budget and spread out my cash outlays.

So for now, I think I like the idea of getting a dual-GPU card such as a 7990 or GTX 690 for temporary use with my quad-core (no hyperthreading) Q6600 and P5KC, and setting this system up as a temporary server while saving money for the "actual server." It would have 8 GB RAM and at least a terabyte of hard drive capacity (for the non-folding stuff); enough for any immediate file-serving or database needs. In fact, it would probably do everything fairly well except CPU-bound animation rendering, but I'm not ready to start that part of the project yet anyway.

SQL Server needs Windows, so for now it will run Windows 8 Pro and no virtualization; I just don't see how a VM would benefit me at the beginning with so few CPU cores to divvy up. Eventually, I would snap this environment off as a VHD when the time comes to run it as a VM side-by-side with a Linux VM for Bigadv CPU folding (after the upgrade to a 2P or 4P rig). Macrium Reflect can transform a backup image into a VHD when that time comes.

Question 1: For folding, how should I compare graphics cards, and how do the dual-GPU cards compare to the single-GPU cards? Does folding really benefit from two GPUs on the same card, and how do they look to the FAH client? For example, does a GTX 690 appear as two folding slots, or does it look like a single "really fast" slot because it is SLI'd?

Question 2: Likewise, how would a 7990's crossfired GPUs appear to the folding software? I kind-of-sort-of would like to stick with AMD partly because an AMD card won't hogtie an entire core from my four-core Q6600.

So here's what I'm thinking to get me off the ground:

Reuse: Q6600, P5KC, and 8 GB DDR3 RAM

The P5KC pic is just a handy one I found so you can see the PCIe slot layout. The blue and black slots are the GPU slots; looks like plenty of clearance for up to two dual-width cards.

I would also reuse an old Thermaltake Toughpower 1200W power supply I have in my parts bin, or I would buy a new one of at least 1,000W.

Buy:

1. One AMD HD 7990 now, and maybe a second one in late August/early September. Some of these are dual width.

3. UPS. I'll need one to bridge the gaps created by thunderstorms and county workers. I'll spend the money on a modern one that can support the Sandy/Ivy/Haswell low-power states. That way I won't have to buy a new one next year when I do the 2P/4P refit.

4. Side Project: I love the Asus 24" IPS screen I bought earlier this year, and I'm thinking of getting a second one or maybe even a 30" IPS screen for the office. If I do that, I could swap out an old 21" or 27" non-IPS monitor to the folding server. That way I could put the server in my kitchen dining nook area and it could be a web browser or podcast player for morning coffee news updates and the like.

Flying Fox wrote:Based on my quick search, it seems like for dual-gpu-on-a-card you just need to set up 2 GPU slots and have 2 WUs crunching, like the old multiproc days.

I think Nvidia has the edge still in Folding? By how much I don't know. You thinking of cost?

Yep, that's what I was thinking, regarding the slots.

Yes, I think Nvidia does have the edge in folding. But as I understand it, Geforce GPUs hogtie a CPU core for folding. Logic suggests that two 690s would hogtie all four cores in my Q6600, leaving nothing for a4 processing or even for media server, database, or browsing duties. And I don't know how long I may have to run this machine before I can upgrade the motherboard and CPU. Maybe only a couple months, maybe longer. Hence my leaning toward AMD.
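For what it's worth, in the v7 client the "2 GPU slots" setup Flying Fox described comes down to two slot entries in the client's config.xml; the slot ids below are illustrative, and the client can usually auto-detect both GPUs of a dual-GPU card on its own:

```xml
<config>
  <!-- One folding slot per GPU: a dual-GPU card such as a
       GTX 690 or HD 7990 presents two devices, hence two slots. -->
  <slot id='0' type='GPU'/>
  <slot id='1' type='GPU'/>
</config>
```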


I forgot what the drop off of points is if the CPU core is in use. The process will not hog 100% of the CPU AFAIK. So you should still be able to overlay the GPU folding processes (they are low priority anyways) with your other programs. So it will be a little slow if you run your other apps, but that's exactly what you want, right?


I recently built a 2P system out of Xeon ES CPUs bought from eBay. I am not recommending you try it or anything, but I managed to get a 2P Sandy Bridge-E system for <$1000. I use it as my build machine. So far it has been working quite well.

Flying Fox wrote:I forgot what the drop off of points is if the CPU core is in use. The process will not hog 100% of the CPU AFAIK. So you should still be able to overlay the GPU folding processes (they are low priority anyways) with your other programs. So it will be a little slow if you run your other apps, but that's exactly what you want, right?

Yes, it is...but (again, only as I understand it), AMD GPUs (with today's drivers) will leave a little bit more CPU headroom than Nvidia GPUs (with today's drivers). This would be nearly a non-issue if I were to install a couple of 8C/16T SBEs, but it is a consideration for a 4C/4T Q6600.

But I should ask: For anybody who might know, is my understanding correct?

But even if I'm right, this could all change in a moment. Five minutes after I install two $900 cards, Nvidia could release "Magic Beanstalk" drivers, making all Geforce cards "Super-Duper". The chances we take, yes?


I should have said "... little slow only when you run your other apps". Most of the time the CPU cores should be available for the folding processes. When other apps are running the Folding process becomes slower, and wouldn't that be ok?


So you're saying I'd be better to dive into this folding server with Geforce? If so, what would be my best choices for um, "maximum GPU folding" on said Q6600 then with the limitation of two PCIe cards in the P5KC motherboard?

I would not say "price is no object"...but let's just say the budget is um, "upwardly flexible" since I can phase in cards over time. And of course, dual GPU cards are definitely an option as long as the P5KC motherboard can handle them and I can source a capable power source too. Eventually the cards will go into a bona-fide server (2P or 4P, assuming they fit the motherboard and enclosure).

Other thoughts: I'll have one monitor plugged in during service as a server (maybe two if it gets heavy local use too), and 5+ years from now, after the cards get retired from folding, they'll probably go into my parts bin, a gifted PC, or just be sold/gifted themselves. If I reuse them after 5 years of hot folding, I'll probably want to replace the fans, and maybe reseat the heatsinks with fresh goop too, yes?

BIF wrote:So you're saying I'd be better to dive into this folding server with Geforce? If so, what would be my best choices for um, "maximum GPU folding" on said Q6600 then with the limitation of two PCIe cards in the P5KC motherboard?

I actually don't know; it really depends on:
1. How big the drop-off in performance is compared to when the CPU core is available for the folding processes to "hog". Last time I tried GPU folding the difference was significant, but I never quantified it by measurement.
2. What the current performance delta is between Nvidia and AMD GPUs at a similar grade/price. Before Stanford switched to OpenCL the difference was so big it was not even worth discussing: Folding = Nvidia if points were your thing. Now, I haven't kept up with the latest.
3. What the current price difference is.

Factoring in #1 and #2 will give you an idea of whether the performance dip (or still a gain, if the performance delta between Nvidia and AMD remains large) will be worth it to allow periods where your non-folding apps take over the computer. #3 will turn that result into a "per dollar" number.
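As a sketch of that "per dollar" calculation: the prices below come from later in the thread, but the PPD figures are placeholders, so treat this as the method rather than a benchmark:

```python
# PPD-per-dollar sketch. Prices are from the thread; the PPD
# numbers are made-up placeholders, not measured results.
cards = {
    "7970 GE": {"ppd": 65_000, "price": 350},
    "GTX 780": {"ppd": 70_000, "price": 650},
}
for name, c in cards.items():
    print(f"{name}: {c['ppd'] / c['price']:.1f} PPD per dollar")
# With these placeholder numbers the cheaper card wins per dollar,
# even though the pricier one produces more points outright.
```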

My bottom line for you: do more research, and maybe buy a card and start experimenting first. Since we are talking post-Vista Windows, you can mix AMD and Nvidia drivers, so why not get one of each, eventually? For your eventual goal of using Linux as the host OS to fold the Linux bigadv WUs, you may need to investigate whether Linux allows mixing of GPU drivers (I would speculate the answer is yes?). However, if you do build this "holdover" setup, how many years into the future will you really be going after a true 2P/4P beast? If we are looking at 2, or maybe even 3-4 years, then you will be looking to replace the GPUs anyway.


Not that dominant once you go to the Folding benchmarks. Anand's review has the 7970 neck-and-neck with the 770 in explicit SP mode. Moving on to the 780 or even the 790 may be the same; I expect the Titan to overtake in that benchmark. We need more data, especially from users, but I couldn't find any.


Flying Fox wrote: ...Not that dominant once you go to the Folding benchmarks...

Yes, we do need more data.

If it comes down to a question of degrees or fractions of folding production, then maybe I can just make my choice based on any or all of these:

1. Power consumption and heat generation under load; just pick the lowest. Or pick the model with the biggest fans.
2. Price or available discount.
3. "Gut feeling" about upcoming drivers, OpenCL support, etc.
4. Other tangible/intangible benefits, such as packaged games. For example, I could give the codes away to family members.

Flying Fox wrote:Not that dominant once you go to the Folding benchmarks.... with explicit SP mode.

SP Folding Benchmarks show the $350 7970 GE beating the $650 GTX 780. That's a win in my book, even if you take Anand's results that the 7970 is more comparable to the $400 GTX 770. But again, I don't know much about folding. I'm just looking at some benchmarking graphs.

Thanks, I remember skimming that article some weeks ago. Upon closer reading, I learned that not all 7990s are equal. Some have lower power usage. Filed for reference.

For now I think I'm back to AMD; it seems my initial hunch was probably accurate: AMD will be fine for this build, especially with a couple of 7990s. I think I can build the box anytime now, with cash outlays only for the case and GPU to get me started. I may even be able to order these as early as early-to-mid August.