eGPU and multi-screen setup

Question: Is it possible to have the internal GPU and external GPU drive separate screens? If so, how?

Hello folks, I'm currently using High Sierra 10.13.2 and 10.13.4 beta 2 on different machines with an external screen, both with an eGPU (the former using an NVIDIA 1080 Ti and the latter a Vega Frontier Liquid). In all cases it looks like the eGPU drives both the internal screen AND the external screen, leaving the internal GPU sitting idle. I've observed this using iStat Menus: the internal GPU never breaks idle, whereas the eGPU is visibly working when I move windows around the screen or launch an application.

Using Unigine Heaven benchmarks to look at performance, I tried switching the primary monitor between internal <> external, and also tried moving the benchmark window and making it full screen after launching it from each primary screen. The results are pretty much the same no matter which screen is primary or where the benchmark app is launched. On the external screen the performance is much better than on the internal screen.

In "About This Mac" only my eGPU shows in the list of graphics adapters. I can't remember if this has always been the case; I remember setting up Sierra some time back, and I seem to recall that both GPUs were listed (but I could be wrong, I know I've seen it somewhere).

Does anyone have a clue if the internal GPU can be activated? My internal GPU happens to be a good one, and so it's wasting away. If my eGPU only had to drive the external screen, then it would perform better too.

Just as a side note: graphics card != GPU. A graphics card (internal or external, no difference) has a GPU and display outputs, but these are not actually "welded" together; a GPU's output can be routed (via the PCIe bus) to any display output in the system. This is why eGPUs can render desktops or even app windows on any display output, why we can have USB-C monitors, and why, on many laptops and AIOs, the display output is physically that of the Intel GPU, with your discrete GPU's frames routed through it.

The important consequence - it's effectively never as simple as "stuff on display 1 rendered by card 1, stuff on display 2 rendered by card 2".

It gets even more convoluted (what happens when you move an app from display 1 to display 2? As you noticed, you can't just hand off an app mid-run from one GPU to another), or, say, if one GPU is an AMD and the other an NVidia. Long story short: even though *technically* it's possible to have multiple cards running in parallel, powering different displays and apps, it's messy business and will depend on driver/application support as well as on how those apps handle (multi-)GPU selection.

The real question is: do you *really* have two bandwidth-intensive apps that *need* to run simultaneously on both displays? The TB3 bandwidth will only be consumed if there is actually something going on that needs to be displayed somewhere reachable via the TB3 link.

I have my mid-2014 MacBook Pro running High Sierra 10.13.2, and I am using the Aorus Gaming Box 1080 through a Thunderbolt 3-to-2 adapter and a Thunderbolt 2 cable, all original Apple parts.

Everything runs fine on the external display when I set it as the primary monitor (it's attached with a DisplayPort cable), but if I try to run anything at all on the internal monitor it goes all glitchy.
I had bought an HDMI headless adapter hoping that with it I could accelerate both screens, but it seems I can't.
I can't even accelerate the internal display properly!

Do you know any setup that can make me accomplish this? Or at least make the internal display run on the GPU of the MacBook instead of the external GPU?

@4chip4, thanks for correcting my terminology. Luckily you knew what I meant, even if it was simplistic 😉

In answer to your question about whether I need multiple bandwidth-intensive apps on multiple screens: actually, yes I do. It's not a major problem right now because the hardware I have is powerful by today's average consumer standards, but it will go out of date, and it would be nice to make good use of all of the kit. It would be nicer still to be able to choose what runs where.

I guess my main issue is that I'm not able to use my system to its max and I'd like to be able to, should I have the need or desire to do so.

I'm no Mac expert, so unfortunately I can't help with the details - even on Windows, where eGPUs have been officially supported for some time now, a lot of apps have a hard time understanding that there are multiple (discrete) GPUs that can (and should) be used. You can keep all your GPUs active, but apps/the OS *will* get confused.

The only ones I've seen reliably working in multi-(discrete-)GPU setups are the various mining applications, so it's certainly possible; it's just that the drivers/software need to evolve, with GPU selection becoming a first-class thing. As said, it's not an inherent limitation, but as eGPUs are still an early-adopter thing, it's hard to tell if such approaches will eventually become mainstream or not.

Finally - Thunderbolt, as fast as it is today, is not an endless well of free bandwidth. The "max" performance will be defined by all the individual components in there, and if you bottleneck the TB connection, well, then that's the max 🙂

In macOS, I can see the "Automatic Graphics Switching" checkbox evolving into a preference pane of its own. Further software optimization will hopefully let us select a power preference for the entire system as well as assign a particular GPU to individual apps. This is the direction Windows 10 is going [preview of Redstone 4 @ 2:52].

One (?) of the problems there is that complexity explodes quickly - unlike with classic computers, eGPU availability is transient, which opens up a whole new can of worms (how to fall back, sleep/resume, etc.). Things are moving in the right direction (hopefully with macOS as well); it's just that there is still a long way to go until it's all truly plug and play.

AFAIK, "preferred graphics" is basically an operating-system variable that an app can choose to follow or not. With NVIDIA's Optimus technology, the equivalent is an exported variable called NvOptimusEnablement.

Due to the nature of how computer programs work, it is the app maker's responsibility to ensure that the eGPU is selectable, or automatically selected in high-power mode. Apple has advised eGPU app developers on how to do this (AMD cards only). If they don't, some games and professional apps will remain stuck using the iGPU or dGPU.

The same applies to Windows:

"Applications are always allowed to have the ultimate choice of which GPU to use, so you may see additional applications that do not follow the preferences you set. In that case, look for a setting within the application itself to choose a preference."

The issue is that it's still going to be a long while until *apps* do the right thing - we simply have so many legacy apps. This is one of those cases where some apps might be too smart for their own good. If an app just uses the system-allocated default (or first) GPU, all is good, as seen in the video (this is why I expect Universal apps to fare slightly better). This is what the NVidia selector does as well (altering the order of GPUs as exposed to the app). The problem is with legacy Win32 apps and games that are smart enough to know they want to run on a dGPU and proactively (try to) do the switching, but lack any understanding of the concept of *multiple* (non-SLI) dGPUs - so they either don't see one of the dGPUs, or, even worse, just keel over when exposed to "too many" GPUs.