Don't you need to put 'optirun' in front of any commands you want to run with the nvidia card? For instance, lspci tells you 'unknown header type 7f' because the card is off (ie in lower power state), so if you do 'optirun lspci' you should see more useful information. If you run nvidia-settings, you also need to specify the X display to use, ie "optirun nvidia-settings -c :8".

And you shouldn't need to worry about nvidia-xconfig, just edit the config file that bumblebee is using (eg on Ubuntu it puts this in /etc/bumblebee/xorg.conf.nvidia).
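A quick sketch of the commands above (the :8 display and the config path are the defaults from a stock Bumblebee install on Ubuntu; yours may differ):

```shell
# Query the card while optirun keeps it powered on
optirun lspci | grep -i nvidia

# nvidia-settings must be pointed at Bumblebee's X display (:8 by default)
optirun nvidia-settings -c :8

# Instead of running nvidia-xconfig, edit Bumblebee's own X config
# for the discrete card (Ubuntu default location):
#   /etc/bumblebee/xorg.conf.nvidia
```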

optirun is only needed when you wish to run some application (e.g. a game) on the dedicated GPU. Commands such as nvidia-xconfig and nvidia-smi do not need optirun, as they work at a lower level.

Moreover, what optirun basically does is run whatever you put after "optirun" in a second X server running on the dedicated GPU, and then draw the results back to the main display. "optirun lspci" does not make sense in this scenario.
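To illustrate the point about the second X server: the same command reports a different GL stack depending on whether it goes through optirun (assuming glxinfo from mesa-utils is installed; this is a sketch, not guaranteed output):

```shell
# Runs on the main X server, using the intel card and libraries
glxinfo | grep "OpenGL renderer"

# Runs on Bumblebee's second X server (:8) on the discrete GPU;
# the rendered output is copied back to the main display
optirun glxinfo | grep "OpenGL renderer"
```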

Perhaps, but in bumblebee you need to use optirun to enable the nvidia card and the nvidia libraries. Otherwise you're just using the intel card and the intel libraries and the nvidia card is turned off.

This is why your lspci command couldn't get any details about the nvidia card, and why "optirun lspci" makes perfect sense. For instance, on my system:

I should have mentioned it before, but lspci errors out only after I get the "fallen off the bus" error (which happens whenever I wish to use the GPU).

Here's what I get after a clean reboot with nothing loaded (no bbswitch, no nvidia module, nothing at all). I can also modprobe nvidia and run lspci afterwards, and the result is the same. I also tried modprobing both nvidia and bbswitch and manually power-cycling the card, which works. Only after doing anything that actually requires the card (be it optirun, nvidia-xconfig, nvidia-smi, or a CUDA program) does the GPU fall off the bus, and the lspci output is as shown in my first post.
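For reference, the manual power-cycle I mean looks like this (run as root; the 01:00.0 bus ID is just an example, check yours with lspci first):

```shell
# Load the modules, then toggle the card by hand through bbswitch
modprobe bbswitch
modprobe nvidia
cat /proc/acpi/bbswitch            # shows e.g. "0000:01:00.0 ON"
echo OFF > /proc/acpi/bbswitch     # power the card down
echo ON  > /proc/acpi/bbswitch     # power it back up
lspci -v -s 01:00.0                # the card still responds normally here
```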

The difference in your case is that you had surely started bumblebee before running those commands, and by default bumblebee turns off the card. During my debugging sessions I set bumblebee not to turn off the card when loaded (which, in turn, made bbswitch keep the card on when modprobed).
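For the record, the knob I changed lives in /etc/bumblebee/bumblebee.conf (section and option names as shipped by Bumblebee; PMMethod=none disables the automatic power-down):

```
# /etc/bumblebee/bumblebee.conf
[driver-nvidia]
# don't let bumblebeed power the card off when it is unused
PMMethod=none
```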

Code:

CONFIG_NO_HZ:
This option enables a tickless system: timer interrupts will
only trigger on an as-needed basis both when the system is
busy and when the system is idle.
CONFIG_RCU_FAST_NO_HZ:
This option causes RCU to attempt to accelerate grace periods
in order to allow CPUs to enter dynticks-idle state more
quickly. On the other hand, this option increases the overhead
of the dynticks-idle checking, particularly on systems with
large numbers of CPUs.

The second option depends on the first. After enabling both, I could query my GPU. I don't know why both are needed, but I'm guessing it has something to do with the interrupts. On my main desktop machine, only the first one is set.
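If anyone wants to check what their kernel was built with, something like this works (the config path varies by distro; some kernels expose it at /proc/config.gz instead):

```shell
# Look the two options up in the build config of the running kernel
grep -E '^CONFIG_NO_HZ=|^CONFIG_RCU_FAST_NO_HZ=' "/boot/config-$(uname -r)"

# or, if the kernel exposes its own config:
zgrep -E '^CONFIG_NO_HZ=|^CONFIG_RCU_FAST_NO_HZ=' /proc/config.gz
```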

One more thing: after doing more testing at the request of the Bumblebee guys, I could see that IOMMU kernel configuration has an impact too. Without this option compiled in:

Code:

CONFIG_CALGARY_IOMMU:
Support for hardware IOMMUs in IBM's xSeries x366 and x460
systems. Needed to run systems with more than 3GB of memory
properly with 32-bit PCI devices that do not support DAC
(Double Address Cycle). Calgary also supports bus level
isolation, where all DMAs pass through the IOMMU. This
prevents them from going anywhere except their intended
destination. This catches hard-to-find kernel bugs and
mis-behaving drivers and devices that do not use the DMA-API
properly to set up their DMA buffers. The IOMMU can be
turned off at boot time with the iommu=off parameter.
Normally the kernel will make the right choice by itself.
If unsure, say Y.

While I would not get "has fallen off the bus", I do get the following messages (rminitcontext etc.) and the card is unusable.