There have been plenty of hardware-oriented discussions on this great forum, none of them conclusive. So let me first explain why, and then... start another one, hoping some good souls among fellow users and/or BMD moderators will provide me with the answers.

OK, so it so happens that at the age of 63, I have to keep in mind that the solid income from my main profession-related activity will cease in 2-3 years, after which I'll become semi-retired, with all my energy and time devoted to digital content acquisition, editing/grading, and final product delivery. As in any art-related activity, my income will be considerably lower than it is now, while I still devote myself to a more stable profession in the area of engineering. But above all, even if I'm lucky enough to make good money on what is currently just my second profession (or should I say passion, or "serious hobby"), it will never come in as regularly as it does now. Therefore, I've started to seriously consider leasing a high-end, as future-proof as possible workstation; if I start soon, it will be paid off completely within those 2 years or so...

At the moment, what I'm considering most seriously is a Supermicro Superserver 7048GR-TR. In the current DaVinci Resolve Configuration Guide, 2x 12 cores are recommended as a minimum; I don't think I can afford that many cores - 2x 8 or even just 2x 6 cores is more realistic; if I'm doing fine, I'll always be able to replace those modest Xeons with more powerful ones without needing a whole new machine (which is the beauty of the dual-CPU server architecture). But I must admit I have no idea (nor is it mentioned in the Guide) how many GPUs to go with in order to get the best possible balance between GPU and CPU performance... And this exactly is the main question for you guys (I'm sure more will come, but let's start with this most important one, as in my experience eliminating bottlenecks is probably the single most important thing).

So - assuming my starting configuration would only have 16 CPU cores in total - how many Titan Xp cards should I start with? The motherboard in those servers has 4 PCI-E 3.0 x16 (double-width) slots, so - when (and if) I'm ever able to increase the total number of cores to, say, 20 - I'll also be able to do so in the most cost-effective way, i.e. by simply adding one or two more identical GPU cards - but:

Just to make it easier for those willing to advise: my projects are mostly UHD (or DCI 4K) @50p (sometimes 25p); the source media - XAVC-I from my FS7 and Long-GOP from my GH5 (often replaced by the highest quality DNxHR or Prores when I'm able to use the GH5 with the Shogun Inferno). I grade for HDR10, and my grades and edits are not particularly heavy (at the moment - I bet that in the future, I will start using much more "sophisticated" grades)... Please share your thoughts; TIA

Given budget considerations, I would consider a 1950X Threadripper instead of expensive Xeons.

I process HDR10 4K@60 as well, also with the Shogun Inferno, and use proxies when necessary if I have to do heavy editing. I have a single 1080 Ti, and with any grain or other processing the GPU is typically the bottleneck. But with a 1950X Threadripper and two 1080 Tis, I would think you'd come a long way.

I think an i9-7960X or Threadripper 1950X build would be more cost effective.

I have a 1950X with 2x 1080ti, and I've only ever been bottlenecked by 6K/8K RED samples (CPU), and sufficient TNR (or other GPU effects) on 4K timelines. If you are curious about performance, I am happy to test sample footage/project playback.

OK, I guess you are right - switching from 2x 8-core Xeons to the 16-core i9. But the question remains - for the best CPU/GPU balance, will 2x Titan Xp be OK? Or will 2x 1080 Ti be sufficient?

I'll build this from scratch, and performance is more important than price... In my current PC, my 8-core i7 worked great with 2x 1080 (neither CPU nor GPU created any bottleneck). I'd like to achieve the same with the new, faster build.

With the information about DR14 being optimized for a single GPU, perhaps a single Titan Xp would balance the 8-core i9 nicely?

Piotr Wozniacki wrote:With the information about DR14 being optimized for a single GPU, perhaps a single Titan Xp would balance the 8-core i9 nicely?

Since you are working regularly with UHD material, I recommend the second GPU. Because graphics card prices are so ridiculous now, the Titan Xp isn't much more expensive than the 1080 Ti, so I would just go in that direction (times two).

Building my i9-7960X 16-core system, should I definitely go for 2x Titan Xp or will 2x GTX 1080ti performance be practically the same?

Piotr

The Titan Xp should be faster (more CUDA cores and faster memory). I don't think you will hit a CPU bottleneck with your file types, though on some exports a CPU bottleneck may show up. Will you really see a real-life difference between 2x Ti and 2x Xp? I don't think so. It's maybe more a question of 1x Xp vs. 2x Ti - that's a more interesting comparison from a performance and price point of view.

I do not know if you can wait a few months. Nvidia introduced the GTX 1080 and GTX 1070 in the spring of 2016, and the rumours say the next generation will be introduced very soon. My guess is it will be called GTX 2080, etc. You can try to Google it. Samsung started full production of GDDR6 VRAM a while ago; I believe it must be for the new graphics cards.

AMD has officially announced new CPUs produced in the 12nm process. The next generation of Ryzen will come in April. Here is a link to an unofficial Multi Core Score of 20102. That is an increase of 31% compared to the current generation.

As I didn't want to wait until the prices of mainstream graphics cards drop, I managed to sell both my GTX 1080s and replaced them with a single Titan Xp.

Watching the Windows Task Manager performance graphs for my 8-core CPU and my new GPU, I can see the CPU has probably become something of a bottleneck now: depending on what Resolve is doing at the moment, the CPU is fully taxed, while the maximum GPU load I've seen has always been below 30% (when caching a timeline with some FX and NR, both previous GPUs were taxed close to the maximum, up to 99%).
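As a cross-check on the Task Manager graphs, Nvidia's `nvidia-smi` CLI can report per-GPU load directly. Below is a minimal Python sketch of the idea - the query flags are standard `nvidia-smi` options, but the function names are mine, and this is just one way to do it:

```python
import subprocess

def parse_gpu_util(csv_output):
    """Parse utilization percentages from nvidia-smi CSV output
    (one line per GPU, e.g. '27' with --format=csv,noheader,nounits)."""
    return [int(line.strip()) for line in csv_output.splitlines() if line.strip()]

def query_gpu_util():
    """Ask nvidia-smi for the current GPU load; requires an NVIDIA driver."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"], text=True)
    return parse_gpu_util(out)

if __name__ == "__main__":
    # Example with canned output from a single GPU sitting at 27% load:
    print(parse_gpu_util("27\n"))  # -> [27]
```

Run in a loop (e.g. once per second while caching a timeline), this gives a log you can compare against CPU load to see which side is saturating first.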

Does it translate to a better overall working experience? Difficult to say yet; I'll need to use the configuration for some longer time, and through different scenarios, to tell. But do you guys think I should replace my 8-core i7 with the 16-core i9 - or save some money and go for the 14-core one? Not only is it cheaper, but its base frequency is higher, at 3.1GHz vs. just 2.8GHz for the 16-core one. I'm aware I'd need to replace my X99 mobo as well, so the CPU upgrade will have to wait till I find a buyer for at least the "core" of my current PC (the X99 mobo + RAM + CPU + CPU cooler).

The danger with 'enthusiastic hobbyists', especially those with a technical ability, is they constantly spend money and keep upgrading. I have one or two recently retired associates who fall into this category. Given that your current setup seems fairly adequate, my advice would be to generate some income from your hobby before you go spending even more money. Constant upgrading can be a slippery slope...

Craig Marshall wrote:The danger with 'enthusiastic hobbyists', especially those with a technical ability, is they constantly spend money and keep upgrading. I have one or two recently retired associates who fall into this category. Given that your current setup seems fairly adequate, my advice would be to generate some income from your hobby before you go spending even more money. Constant upgrading can be a slippery slope...

You are so right, Craig... But the Titan screams for more CPU juice - and considering I will retire from my main job soon, if I don't do it now, I may well never be able to do it.

Sounds familiar. I've got the previous-generation Supermicro and am constantly comparing it to new hardware, which is faster and cheaper. But I already have this and it's super stable, and I actually use the IPMI to control the system remotely sometimes. It's extremely robust. But...

If I were to get rid of this system now, I'd definitely go the enthusiast-hardware route. Although I like my Xeons and buckets of ECC RAM, I don't really think it's helping me THAT much. Speed is more valuable in everyday operations, and high-performance parts are pretty reliable.

In conclusion though, I won't be replacing my hardware any time soon. I feel I might look into it again when the next gen of CPU and GPU hardware arrives, but chances are I'll skip another step and go for whatever comes next, unless I'm truly rocked by the difference OR the prices of current gen drop significantly.

Piotr, I retired 16 years ago, also from a tech background. With that in mind, I also knew that technology has a habit of doubling performance at half the price quicker than one can use it !!! If you do not need the performance to edit today, do not buy, as there is a likelihood that even next week something will come out to make the purchase seem a little silly. Looking at utilization figures is not the same as measuring whether a system is good enough to work with. A bit like saying I need a faster car when one knows that even the slowest one can get you a speeding ticket !!!

My main editor is EDIUS 8.5WG, and I'm also learning Resolve. My present system is an i7 4790K with an MSI 11G 1080 Ti, plus an Intensity Pro 4K card that of course will work with both NLEs, and also with Vegas 15, which I have too. For both EDIUS and Resolve, this system is happy to edit my GH5 UHD 60p files in either a 3840x2160 or a 1920x1080 project, as I can set both up to manage the preview resolution to get what I want. Since I built my 4790K system I have looked at all the possible upgrades and just stayed with it, as the true gains for my use (mainly theatre shows) can be managed with little effort. I will likely get a Threadripper system later in the year when the 12nm parts come out, for a number of reasons. AMD has said they will stay with the TR4 socket for a while, so this should still be usable when the 7nm parts come out in a couple of years too. Staying with a socket is not something Intel has a good record of doing !!!

If your present system works for you I would stay with it. By next year you will likely be able to get twice the performance for a lot less than it would cost you today.

Piotr, I did not intend to call you silly at all. I share your desire for fast things and was just pointing out that getting more than you actually need at any given time is likely not beneficial. Certainly, expecting to get a system that will last a long while is unrealistic given the rate of change of technology; next year will surely bring the possibility of a faster and lower-cost system. As the Puget Systems tests show, the CPU is of less importance in most cases, and having a system with lots of PCIe lanes for the GPUs and other cards may be more important. This is where Threadripper has an advantage over some of the Intel parts, in that just about all the Threadripper motherboards have a lot more PCIe lanes.

I edit multicam from my GH5 UHD60P with 4 other HD cameras in EDIUS 8.5 WG in a 1920x1080 project with no problems on my 4790K system. If they were all UHD yes that would be a problem !!!

My current thought is a Threadripper system, adding an extra 1080 Ti card to the one I already have in my present system.

EDIT: The 4790K does have an advantage in that it has Quick Sync, so decode on the EDIUS timeline is accelerated and worthwhile. Interestingly, this worked together with the 1080 Ti for Resolve 12, but does not do so for Resolve 14, which insists on using just the 1080 Ti; I had to rearrange the way my monitors were connected when I upgraded to Resolve Studio 14.

As you can see, I have already revised my original plan of upgrading to a dual-Xeon workstation, and - after having replaced my 2x GTX 1080 with 1x Titan Xp, and seeing how my current 8-core CPU is now taxed in comparison to this mighty GPU - am trying to upgrade the 8-core i7 to a 14 or 16-core i9.

My questions to you are currently the following:

- which of the two i9s mentioned will match my single Titan Xp better?
- since, in order to accommodate the latest-generation i9 CPU, I'll also need to replace my X99 mobo with an X299 one - should I buy a model with enough PCIe lanes/slots to accommodate another Titan Xp in the future, or will my current, single Titan Xp always be enough (i.e. even with the 16-core i9 CPU)?

I would go for the i9-7940X and overclock it (especially since you're not planning to work with RED footage). Running 14 cores at a high clock is going to be more effective than 18 cores at a much lower clock, and there should be quite a lot of headroom for 14 cores to run at a high clock all the time.

Yeah... One can really get lost, with all this contradictory information!

I've heard "for a fact" that with Resolve 14.3, a single strong GPU like the Titan Xp is the best solution, but they (BMD) never said anything about the CPU/number of cores. Indeed, after replacing my 2x GTX 1080 with a single Titan Xp, I can see it's never taxed higher than some 30% (with my 8-core i7 CPU), but frankly the overall experience doesn't seem to be much faster than with my old GPUs... This is why I want to "balance" my CPU and GPU better, and I think 14 or 16 cores might just create that sweet spot. But would that then call for another Titan Xp? I'd rather it didn't, because my old problem of lacking PCIe lanes would appear again... For that reason, I'd prefer to stay with just a single Titan Xp and upgrade to the "sweet spot" number of CPU cores only...

Piotr

PS. Margus has just responded to my inquiry; from what he says a single Titan Xp would balance nicely with the 14-core i9. I guess this is the way to go for me (unless some new information/benchmark surfaces again )...

You're thinking too much and overdoing it. Build a machine which works for your needs, not the fastest money can buy, as in 1 year things will already be very different. At some point you'd be adding, e.g., another 2K Euro to gain 5% performance, which is a pure waste of money.

I don't think you need a faster machine than a 14+ core i9 with a single Titan. When you work in Resolve, you want things to be realtime; it's only when you export that CPU+GPU power can be used at maximum. If you can already export 4K 60p projects in realtime, I would say this is good and not much more is needed.

All the above are comparable when working with a surface and clients in the room; there are no issues with any of them, and the same amount of work gets done in the same amount of time. The differences are mainly in render speed once the clients leave, and normally we do not deliver the same day as the session anyway, so rendering R3Ds to DPX at 100fps or at 60fps makes next to zero difference.

Then there's the second lot, in descending order; in my experience, none of these would be a good choice for a client-attended session @ 4K, but they're usable at 2K/HD:
7940X / 1x 1080
Z800, 2x 5690 / 1x 1080
Trashcan, 12 cores / 2x D700
T5500, 2x 5580 / 1x 1050 Ti

End game for me is to use machines that run a session transparently; beyond that, I'd need to see what the return on the investment could possibly be... A Z820 with 2x 8-core CPUs is the bar to pass for the work I do; anything beyond that has diminishing returns, or perhaps zero returns beyond bragging rights.

Very interesting discussion you guys are making of this little thread of mine. But please do some searching on what my demands from the hardware are when editing my 6-10 camera music videos, and you'll understand it's not just the rendering/exporting phase that's important here.

Well - it's true they said it in a way which might be understood either way. Hopefully, one of our BMD friends will chime in and elaborate.

Anyway - just in case the compute GPUs still do scale the way they always have - I decided to go with an X299 motherboard which will let me add another Titan Xp without needing to relegate my double-width Decklink card outside the case... It's always better to have options than to exclude them for a couple-hundred-dollar price difference.

Well, long LOOOOOONNNNGGGGG before I'd build a machine to offline-edit six streams of UHD in a UHD timeline, I'd go to a proxy-based, offline-style workflow in HD... but I live in a world that, for the most part, splits creative editing and finishing.

So really... six streams of Long-GOP UHD in a UHD timeline, for creative storytelling?

Why on earth would one do that?

HDR10 - or any HDR flavor for that matter - makes zero difference to the machine.

Anyway, I live in a world where I mainly grade and finish a single stream of raw, with some EXR and DPX film scans.

HaHa - no, I'm most certainly NOT going to cut a multicamera timeline with UHD clips in a Long-GOP format. But even having created HD optimized media with some intra-frame codec, because of the millisecond-precision a/v sync demand, I need zero-drop-frame playback capability across all the tracks comprising my MC timeline.
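For anyone following the proxy route discussed above, here is a minimal sketch of how one might batch-generate HD intra-frame proxies from Long-GOP UHD sources with ffmpeg. The ffmpeg flags are standard (DNxHR SQ via the `dnxhd` encoder), but the helper function, file names, and codec choices are my own assumptions, not anyone's established pipeline:

```python
import shlex

def dnxhr_proxy_cmd(src, dst, width=1920, height=1080):
    """Build an ffmpeg command line that downscales a UHD Long-GOP clip
    to an HD DNxHR SQ proxy (intra-frame, edit-friendly) with PCM audio.
    Frame rate and timecode are carried over from the source."""
    return ["ffmpeg", "-i", src,
            "-vf", f"scale={width}:{height}",
            "-c:v", "dnxhd", "-profile:v", "dnxhr_sq",
            "-pix_fmt", "yuv422p",
            "-c:a", "pcm_s16le", dst]

# Hypothetical clip names, just for illustration:
cmd = dnxhr_proxy_cmd("A001_clip.mp4", "A001_clip_proxy.mov")
print(shlex.join(cmd))
```

Because DNxHR (unlike the older fixed-profile DNxHD) accepts arbitrary resolutions and frame rates, the same command works for 25p and 50p sources alike; each frame is independently decodable, which is what makes multi-track playback cheap on the CPU.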