I kind of expected that answer; it would have been too nice. Coming to the second part of the question: in case I get the full version, do the GPUs need to be the same if I have multiple, or can I run two different GPUs in parallel, let's say a GTX 770 and a Titan?

Whilst we're on the subject of GPUs: my current Win 7 setup with Resolve v11 Lite uses the mobo's internal Intel HD 4000 GPU to power the GUI monitor and a scopes monitor, whilst my reference monitor is driven directly from a DeckLink SDI 4K PCIe card. My main GTX 970 GPU is used only for Resolve processing (and occasionally for running the reference monitor via its DisplayPort).

The theory behind the current arrangement is that using the Intel 4000 for GUI/scopes monitoring takes stress off the 970, leaving it free for Resolve processing.

The 970 is capable of running four or more monitors so with a Resolve 12.2 install on Win 10 x64 Pro, should I keep the same monitor arrangement or disable the internal motherboard GPU and run all monitors from the GTX 970?

It was the setup I had been using successfully for a year or so with the Asus Z87 Expert motherboard. I had initially installed a Quadro K600, so I wanted to keep as much of it free for GPU processing, but currently I have a GTX 970 running successfully under the same setup.

However, now that v12 beta 2 is out (and should support my OxygenTec panel) I plan to install Win 10 and rearrange the system for v12 so I'll connect GUI and Scopes monitors to the 970 and report back after the changeover.

Generally, OSX won't see more than 2 NVIDIA cards. And in the case where it might, Resolve generally won't see more than 2 GPUs. I say generally because there have been a few (unexplained) exceptions to this. Since it's not an absolute thing, all we can suggest is to try it and see.

If you have more than 2 NVIDIA cards, check in the OSX System Report under Graphics/Displays and see if it properly identifies all the NVIDIA cards. If so, then check in Resolve Preferences - System Overview, and see if it sees all the GPUs.
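If you prefer the command line, the same information shown in the System Report can be pulled with `system_profiler`. This is just a quick sketch; the grep pattern assumes the cards are reported with "NVIDIA" in their "Chipset Model" line, which is how they usually appear:

```shell
# Dump the same data shown in System Report > Graphics/Displays
system_profiler SPDisplaysDataType

# Count detected NVIDIA cards
# (assumes each card has a "Chipset Model: NVIDIA ..." line)
system_profiler SPDisplaysDataType | grep -c "Chipset Model:.*NVIDIA"
```

If the count matches the number of physical cards installed, then check Resolve Preferences - System Overview to confirm Resolve sees them too.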

Again, this was an OSX behavior change from Mountain Lion. In Mountain Lion, all NVIDIA cards were always seen.

John Burton wrote:Interesting, I know you've heard it a hundred times, but I, and many others, would pay for a ProRes option under Windows. It's the only reason I run OSX.

Thanks

I know it's a workaround, but many people are not aware that you can create ProRes on a PC with the right software. In my case, when a client requires a ProRes file or sub-master, I render out a DPX image sequence and use ClipToolz Convert v2.0 to make the ProRes file in either HD or 4K, 4:2:2 or 4:4:4. It's also useful for transcoding compressed 4:2:0 media from toy cameras to ProRes or DNxHD with timecode and reel numbers.
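For anyone without ClipToolz, ffmpeg can do the same DPX-to-ProRes wrap on Windows using its `prores_ks` encoder. A minimal sketch; the file names, frame rate, and audio track are placeholders, and profile 3 corresponds to ProRes 422 HQ:

```shell
# Encode a DPX image sequence plus a WAV to ProRes 422 HQ (profile 3)
# frame%06d.dpx and audio.wav are placeholder names
ffmpeg -framerate 24 -i frame%06d.dpx -i audio.wav \
       -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le \
       -c:a pcm_s16le output.mov
```

For 4:4:4 deliverables you would switch to `-profile:v 4` with `-pix_fmt yuv444p10le` instead.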

Most current work scenarios don't need more than 12 GB of VRAM today. So given a choice of 3 Pascal Titan X 12GB cards (or even 3 old-style Titan X cards) versus one M6000 24GB card, the 3 Titan X cards will have much better peak performance.

Hello Dwaine! The info I found is a bit contradictory, so can you clarify: can DaVinci Resolve (the free one) use two video cards under Windows? I understand that it can use clusters for grading, but in some sources I read that it can use one graphics card for the interface and another (only one other) for calculations. Is that true?

I'm still debating which GPU card to add to my PC (see my current config in my sig, below). I don't have enough free slots for 3 double-width cards, so I wonder what happens when, rather than sticking to the recommended "all compute GPUs equal" scenario, I install the Titan X (Pascal) alongside my current GTX 1080 and set the Titan to also be used for the UI. What I'm speculating is that, with the VRAM for compute capped at the 8 GB level (the amount on the GTX 1080), both the power and the VRAM surplus of the Titan would be utilized for UI display. Is this a correct assumption? Thanks,

Piotr

PS. Of course, the reason for contemplating this path is that when the time comes for the next upgrade down the road, it will be cheaper to upgrade one GTX 1080 to the Titan than if I bought another 1080 now...

I would expect that to work OK, but total VRAM will be limited to the 8GB on the 1080.

The GUI places little burden on a card like that. But having the extra performance on the Titan-X available probably offsets what little burden there is. The extra VRAM on the card would not enter the equation.

Dwaine Maggart wrote:I would expect that to work OK, but total VRAM will be limited to the 8GB on the 1080.

The GUI places little burden on a card like that. But having the extra performance on the Titan-X available probably offsets what little burden there is. The extra VRAM on the card would not enter the equation.

Yes Dwaine - I'm fully aware I'll be limited to the smaller VRAM amount of the 1080. What I'm uncertain about is whether the overall processing speed will also be capped by the weaker link, i.e. the 1080 (thus being at most 2x the processing speed of a single GTX 1080). But this is something only the code developers could answer authoritatively, I guess...

Generally on Mac, 3 high-end GPUs is the limit. But some people seem to be able to run more. It depends somewhat on the OS version and the type of cards used. Generally, on El Cap or Sierra, if you go over the limit, the system won't boot. If it boots, it's probably going to see all the cards. You can verify this in About This Mac - System Report - Graphics/Displays. If all the GPUs are properly identified, then Resolve should be able to see and use them. You can verify that in Resolve Preferences - System.

On Windows 10, we have in the past recommended a 3 GPU limit. However, I've seen people with more. Up to 7, in fact. But there could be stability issues with that many. So again, it's a "your mileage may vary" type of situation.

V14 may make the Windows situation better with multiple GPUs, but the development team has not yet provided specific guidance on that. Hopefully by V14 release time, we'll have a better idea.

Unless you're running the Asus P9X79-E WS, I don't think you'll have enough PCIe lanes to run 3 GPUs at x16 plus the Decklink card at x4.

Ditch the GUI GPU and stick with the two GTX 1080 TI cards. I'm running 2 in a P9x79 Pro and it utilizes both, especially with Neat Video.

Also, if you're going the hackintosh route with dual GPUs, you'll need to use X79 (obsolete), X99 (non-native) or X299 (which comes out later this month and isn't supported at all). It's a pain, and unless you need ProRes deliverables, just go with Windows. Cineform works great and it's natively supported in Premiere CC.

We'll be using the Gigabyte X99 Designare EX motherboard, which allows me to have all three video cards running at x16 and the Blackmagic card at x4. But I was wondering whether the smaller memory on the 1030 will hamper the memory available to the 1080 Tis for image processing, based on Dwaine's comments.

Unfortunately, 99.9% of my deliverables are ProRes, so MacOS is the best option for me. One of the advantages of a Hackintosh is that if everything fails, you will end up with a monster of a PC.


I recently set up a hackintosh with dual 1080 Ti cards and an Asus Prime X299-Deluxe motherboard. The two 1080 Tis show up in macOS 10.12.6 and also macOS 10.13, but there is a problem in DaVinci Resolve. I use the "Standard Candle" DaVinci project to test the machine's performance. When I select only one 1080 Ti in Resolve's preferences, it can run 66 blur nodes at 15 fps; however, if I select both 1080 Tis to work together, it can only run 66 blur nodes at 12 fps. It's so weird... I don't know where it goes wrong...