
Comment

I'm not sure I'm actually getting what this commit is about.
I've been familiar with Bumblebee, VirtualGL and Primus for quite a while, and I'm now using nvidia-prime, so I understand the difference between what Bumblebee does and what PRIME does (the zero-copy performance advantage...).
But here I don't get it... What does it do exactly? Can it be set up as a transport option for Bumblebee (with no, or only a little, extra code needed)? Is there any performance gain compared to Primus? Does it allow switching off the Nvidia card when you're not using it?

Comment


None at all. It doesn't claim to do anything new or in a better way. Somebody asked me why their Bumblebee integration code didn't work and showed me the hack they were using. Since, as it turned out, I had very similar code in the driver for another purpose, I made that piece of code replace their hack.

Yes, this is only for those people who choose not to use PRIME. The kernel-level integration with PRIME is the right approach from the performance, power and usability standpoint. And the VirtualHeads can be made driver-independent by building that functionality into the X server (along with the external transport process) and using providers - e.g. an accelerated Xvnc.
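For anyone unfamiliar with the provider model mentioned above, this is a rough sketch of how the kernel-level PRIME path is driven from userspace via RandR 1.4. The provider names ("nouveau", "Intel") are examples only - check what your server actually reports:

```shell
# List the GPU providers the running X server knows about.
# Output and provider names depend entirely on your hardware/drivers.
xrandr --listproviders

# PRIME output slaving: let the Intel GPU display on outputs that
# are wired to the discrete GPU (names are illustrative).
xrandr --setprovideroutputsource nouveau Intel

# The discrete GPU's outputs should now appear in the normal list:
xrandr
```

This is the approach being contrasted with the VirtualHead/Bumblebee path: here the copy and synchronisation happen in the kernel rather than through an external transport process.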

Comment


So it allows TheBumblebeeProject-like projects to work natively with the Intel driver if one doesn't want to use Nvidia's PRIME or AMD's solution? Am I getting that right?

Comment


Yes. It incorporates upstream the existing code that people are currently using. I expect that it will be replaced by real integration between the drivers, but since it added very little maintenance burden, and looks to be a useful tool, it seemed acceptable to upstream.

Comment

Thanks for your great work on the Intel graphics driver! My ThinkPad T430 works very well with the Intel open-source driver!

But I have a question about this new feature: Does this allow me to use the DisplayPort connection on my notebook?
It is hardwired to the Nvidia chip, and up until now there hasn't been a way (as far as I know) to use it on Linux with the Intel driver. I interpreted the news as meaning this could finally be possible :-) but I'm now a little bit confused, because you said that it doesn't add features which did not exist before.

Thanks in advance!
Kind regards
Michael

Comment


What the commits to the Intel DDX do is simplify the Bumblebee approach of using the binary Nvidia driver to create a second X server to control the discrete GPU and its displays, and present that as an extension of the first X server (using -intel). With a standard setup this should be as easy as startx & intel-virtual-output, which presumes that the X server finds both GPUs and assigns :0.0 to -intel and :0.1 to -nvidia.

The alternative approach is to use -nouveau and PRIME.
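The standard setup described above can be sketched as follows. The display assignments (:0.0 for -intel, :0.1 for -nvidia) are the assumed defaults, not a guarantee, and the VIRTUAL output name is an example:

```shell
# Start the first X server; with both GPUs configured, the Intel DDX
# is assumed to drive :0.0 and the Nvidia server :0.1 (see above).
startx &

# Inside the session, attach the outputs wired to the discrete GPU
# as virtual heads on the Intel server:
intel-virtual-output

# The new outputs (e.g. VIRTUAL1) should then appear alongside the
# native ones when listing with:
xrandr
```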

Comment

So, where do we start if we want to try this? Assuming it lands soon in xorg-edgers, what tools do we need to get it running (bbswitch to wake up the discrete GPU, some tool to configure the virtual head and to make sure the application's OpenGL rendering is done by the Nvidia card, then kill the virtual head and use bbswitch again to switch off the discrete GPU)?

Could this be mixed with the new Nvidia RandR 1.4 support (can we get the Nvidia card to render directly to the virtual head, with the new zero-copy option)?

Comment


By the point where I was comfortable writing this update, the process for using the outputs on the Nvidia card was:
0. Install the latest Intel drivers
1. apt-get install bumblebee-nvidia
2. Modify the Bumblebee configuration to enable outputs on the discrete GPU

3. Run intel-virtual-output [which automatically detects Bumblebee and requests an X server for the Nvidia GPU]

Could this be mixed with the new Nvidia RandR 1.4 support (can we get the Nvidia card to render directly to the virtual head, with the new zero-copy option)?

There is nothing stopping you from trying... In theory, PRIME should be able to negotiate zero-copy support just as well through the kernel (and hopefully control the synchronisation better, and so be easier to use, perform better, and integrate better into power management, etc.).
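For comparison, this is roughly what the RandR 1.4 render-offload side of PRIME looks like from userspace. Provider names below are examples and the DRI_PRIME mechanism applies to the open drivers; with the binary Nvidia driver the details differ:

```shell
# Check which providers are available first; names vary by driver.
xrandr --listproviders

# PRIME render offload: have the discrete GPU render for applications
# while the Intel GPU scans out (provider names are illustrative).
xrandr --setprovideroffloadsink nouveau Intel

# Run an application on the offload source and confirm which GPU
# is rendering:
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
```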