
Blindly? Again, you didn't provide any reason for that to be the case. And I don't have such a system (the one Core 2 I do have doesn't have VT-d to begin with). What philip550c says makes sense, though, in that some motherboards have buggy firmware that doesn't actually enable the feature. But that doesn't mean there are different versions of VT-d, some of which lack IOMMU functionality.

I, as well as others, have provided a reason. Just because your CPU supports IOMMU doesn't mean the chipset or motherboard does, and just because your motherboard supports it doesn't mean the CPU does. Think about it like this: if you have a chipset that supports three PCIe x16 lanes but a motherboard with only one slot, can you do CrossFire/SLI? No, you can't. IOMMU is the same way.

Note that on most boards with IOMMU support, it is a BIOS option. If your i5 board's BIOS doesn't have an explicit option to enable/disable IOMMU or VT-d, you can't do GPU passthrough.
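As a rough sanity check on top of the BIOS option, a Linux host exposes whether an IOMMU actually came up. This is a minimal sketch, assuming a Linux host; the sysfs path is standard, but whether anything appears there depends on the hardware, the BIOS setting, and kernel options like intel_iommu=on:

```shell
# Rough host-side check for an active IOMMU on a Linux host.
# /sys/class/iommu is populated only when the kernel brought an IOMMU up,
# which requires CPU + chipset + BIOS support all at once.
if ls /sys/class/iommu/ 2>/dev/null | grep -q .; then
    iommu_state="active"
else
    iommu_state="inactive"
fi
echo "IOMMU: $iommu_state"
```

If this reports inactive despite the BIOS option being enabled, that is consistent with the buggy-firmware case mentioned above.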

Comment

That's exactly what I was saying. Yes, both the CPU and the motherboard has to support VT-d, and it has to be enabled in the firmware. And the firmware must not be broken. That is definitely true. But if these conditions are satisfied, then there shouldn't be any further problems with it.

Comment

Hmm, I did a quick test in VirtualBox (which also has PCI passthrough, though it's probably not meant to be used as VGA passthrough just yet). The VM did detect my card correctly, although it couldn't start it, saying that there was no monitor attached. Which is fair enough, because there wasn't one attached (I only have one monitor here), but even if there were, it probably wouldn't change a whole lot, since it still wouldn't be outputting to the other card.

That's the confusing part for me. How is it supposed to draw things? Should it output the entire VM window to the dedicated card, viewable only when something is plugged in there, or work like Bumblebee, using the dedicated card for processing and the integrated one for displaying things? Do I need the NVIDIA module to be loaded on the host or not?

EDIT: Looks like qemu figures out that it has to output things to the card it owns, and the host card is shown a black screen. At least according to this, which is another nice guide on how to do it, and is more recent: https://bbs.archlinux.org/viewtopic.php?id=162768
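For reference, the kind of qemu invocation those guides describe looks roughly like this. This is only a sketch: the PCI address 01:00.0 and the disk image name are placeholders, and the GPU has to be bound to the vfio-pci driver on the host first:

```shell
# Sketch of a VGA passthrough launch (addresses/paths are placeholders).
# -device vfio-pci hands the physical GPU to the guest; x-vga=on enables
# the legacy VGA quirks passthrough needs; -vga none disables qemu's own
# emulated display, so the guest renders straight to the real card.
qemu-system-x86_64 \
    -enable-kvm -m 4096 -cpu host \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -vga none \
    -drive file=windows.img,format=raw
```

With -vga none, the only output is on the monitor physically attached to the passed-through card, which matches the black-screen-on-host behavior described above.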

Comment

How is it supposed to draw things? Should it output the entire VM window to the dedicated card, viewable only when something is plugged in there, or work like Bumblebee, using the dedicated card for processing and the integrated one for displaying things?

The passed-through GPU's output is available when you plug a monitor into one of its output ports, so there is no need for explicit support from the host or guest OSes. In fact, if you don't have a secondary monitor to plug into the passed-through GPU, it is difficult to get any output from the VM.

This kind of setup works best for desktop computers where each GPU has at least one dedicated output, but it doesn't lend itself well to laptops. I have a Clevo P150EM laptop with an i7-3720QM and an AMD 7970M. VGA passthrough worked fine and I was able to run high-end demos in the Windows VM, but I had to access the Windows VM via remote desktop (Splashtop in this case), so as you can already guess: latency is an issue at resolutions higher than 720p.

I've been thinking about it a lot, to the point of considering some hardware hackery to rewire the AMD GPU's output to one of the outputs currently wired to the Intel GPU, but it's really risky.

Comment

I have to concede here. On closer inspection, VT-d is more common than I thought. But the later posts show that it's not so simple. I still maintain that it's very hard to pick a CPU/motherboard combo and be reasonably sure in advance that it will work. And I wouldn't trust it on a non-Xeon setup. Even on Xeon, I wonder if there's any difference between the E3, E5 and E7 lines with respect to the VT-d feature set.

All the Xeon setups I have (E3s, 55xxs, 56xxs and E5s) have full VT-d support - at least all the ones using Intel boards.
Keep in mind that I've only physically tested it on a 5680 machine.
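A quick way to tell the CPU side from the platform side on a Linux box: the CPU's virtualization flag shows up in /proc/cpuinfo, while VT-d proper is platform-level and only appears in the kernel log (DMAR/IOMMU messages). A minimal sketch:

```shell
# vmx is the Intel VT-x flag in /proc/cpuinfo; its presence says nothing
# about VT-d, which also needs chipset and BIOS support (check dmesg for
# DMAR/IOMMU lines to confirm the platform side).
if grep -qw vmx /proc/cpuinfo 2>/dev/null; then
    vtx_state="present"
else
    vtx_state="absent"
fi
echo "VT-x: $vtx_state"
```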

Comment

True. At least as far as I know, large page support isn't required in order to get a second GPU attached to a VM.

I wonder whether that prevents the host kernel from using large pages. I would have thought they're a particularly good fit for virtualization. I think the current Linux kernel can use large pages automatically, but maybe this prevents it. Does anyone know whether that would prevent large pages only for the virtual machines, or for software running on the host as well? Sure, it's just a performance optimization, but still...
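The automatic mechanism mentioned here is transparent hugepages, and the host's current mode is easy to inspect. A small sketch, assuming a Linux host (the path is standard on recent kernels; the bracketed word in the file is the active mode):

```shell
# Report the host's transparent hugepage mode, e.g.
# "[always] madvise never" means THP is applied automatically.
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
           || echo "unavailable")
echo "THP mode: $thp_mode"
```

Comparing this before and after setting up passthrough would be one way to see whether the host-side setting is affected at all.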

Comment

Hmm, I did a quick test in VirtualBox (which also has PCI passthrough, though it's probably not meant to be used as VGA passthrough just yet). The VM did detect my card correctly, although it couldn't start it, saying that there was no monitor attached.

Do I need the NVIDIA module to be loaded on the host or not?

It wouldn't work anyway, even if you did have a monitor attached - VirtualBox has no GART support, which is supposedly ridiculously complicated to pass through.

AFAIK, a virtual display in qemu is optional, but many OSes allow you to use more than one GPU to render screens, even if only one is accelerated. With the virtual display on, it would basically be like a dual-monitor setup. If you were to virtualize Windows with a discrete GPU, you could still disable the virtual GPU in Device Manager and set the discrete GPU as the primary. When you pass through a GPU, that GPU is, in a way, "exiled" from the host system. That being said, from the guest's perspective, it isn't virtual, and therefore can and must be used as a regular GPU.

To me, the most ideal purpose of GPU passthrough is multi-seat. I'm not aware of being able to use the rendering power of the discrete GPU with the virtual display, but I'd like to be proven wrong.

Comment

I'm not aware of being able to use the rendering power of the discrete GPU with the virtual display, but I'd like to be proven wrong.

That would be great. I would really like to have a setup where I have a server hidden in the closet that runs a virtual machine for my desktop and have full GPU acceleration while being accessed remotely. One can dream...