Ultrabook Buyer's Guide: Best Laptops for External GPU

As we’ve gained a better understanding of Thunderbolt 3 (TB3) eGPU enclosures, the remaining unknown is the performance of the Thunderbolt 3 host computer. Thin and light ultrabooks are often a top choice to pair with an external GPU. This article serves as a buyer’s guide for choosing the best ultrabook to get the most out of a TB3 eGPU. The particular setup I’m using is an early 2018 Razer Blade Stealth with the Razer Core V2. This pairing is one of the highest-performing TB3 ultrabook + eGPU setups as of Q1 2018. The Razer Blade Stealth shares many fundamental components with a handful of other Thunderbolt 3 ultrabooks that we will explore below.

The Intel Thunderbolt Technology website maintains a list of certified Thunderbolt products, which you can use to find Thunderbolt 3 laptops currently on the market. The multitude of choices can be overwhelming, yet the specifications we’re most interested in for external graphics use are not readily available. So what criteria should you consider when choosing the best ultrabook for external graphics pairing? Through hundreds of implementations and build guides, we’ve distinguished three key features:

ULV CPU

For this buyer’s guide, we’re focusing on the newest crop of ultrabooks with Thunderbolt 3 connectivity. They sport Intel’s 8th-generation quad-core ULV processors, doubling the core count of the previous generation. The top turbo speed depends on both the CPU workload and the ultrabook’s cooling system: when nearing the TDP limit or the CPU junction temperature ceiling (95-105°C), the CPU throttles down performance. Thin, light systems are more than skin deep.
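The interaction between cooling and clocks can be pictured with a toy model. The sketch below is purely illustrative (the linear back-off and the 20°C window are assumptions, not Intel's actual governor); it only shows why the cooling system, not the spec sheet, bounds sustained speed:

```python
# Toy model of thermal throttling -- purely illustrative, not Intel's
# actual governor logic. It only shows why cooling bounds sustained clocks.
def effective_clock(turbo_ghz: float, base_ghz: float, temp_c: float,
                    tj_max: float = 100.0) -> float:
    """Back off linearly from turbo toward base clock as the junction
    temperature approaches Tj-max (here, the last 20 C of headroom matter)."""
    if temp_c >= tj_max:
        return base_ghz                              # hard throttle at Tj-max
    headroom = min(1.0, (tj_max - temp_c) / 20.0)    # 1.0 = fully cool
    return base_ghz + (turbo_ghz - base_ghz) * headroom

print(effective_clock(4.0, 1.8, 70))    # cool chassis: full turbo, 4.0
print(effective_clock(4.0, 1.8, 95))    # hot chassis: partially throttled
print(effective_clock(4.0, 1.8, 105))   # past Tj-max: base clock, 1.8
```

Two laptops with the same CPU can therefore sustain very different clocks, which is exactly what the benchmark comparisons later in this guide show.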

With nearly identical architecture to the previous generation, the 8th-generation ULV CPU improves performance by running double the number of cores, often at a reduced, more efficient speed to stay within limits, thereby delivering greater overall throughput. These ultrabooks are the first revision with quad-core ULV processors and should remain future-proof for the next few years. At the time of writing, the i7-8550U is the highest-performing, readily available ULV CPU.

PCIe Lanes

A Thunderbolt 3 connection carries at most 4 PCI Express lanes between the host and the device. In a ULV CPU ultrabook, this means allocating 4 out of a maximum of 12 PCIe lanes. Several peripheral components inside a laptop make use of these high-speed interconnect lanes. In a typical ultrabook, the NVMe flash storage drive gets a x4 PCIe connection, while the wireless card and other components may use a few x1 connections. If there’s a discrete graphics card, it consumes another x4 connection. Resource allocation conflicts arise when PC manufacturers decide how best to use these 12 lanes, and because Thunderbolt 3 is a relatively new standard, its ports are often not top priority.
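A back-of-the-envelope tally makes the squeeze obvious. The component names and lane counts below are typical examples, not the allocation of any specific laptop:

```python
# Hypothetical PCIe lane budget for a ULV ultrabook.
# Component names and lane counts are illustrative examples only.
LANE_BUDGET = 12  # max high-speed lanes on the ULV platform

allocation = {
    "NVMe SSD": 4,
    "Thunderbolt 3 controller": 4,   # full x4 -- the best case
    "Wi-Fi card": 1,
    "Card reader": 1,
}

used = sum(allocation.values())
print(f"Used {used} of {LANE_BUDGET} lanes, {LANE_BUDGET - used} spare")

# Add a x4 discrete GPU and the budget no longer fits --
# something (often the TB3 controller) gets cut down to x2.
allocation["Discrete GPU"] = 4
overrun = sum(allocation.values()) - LANE_BUDGET
print(f"With a dGPU: {overrun} lanes over budget")
```

This is why dGPU-equipped ultrabooks so often ship with a x2 link to the Thunderbolt 3 controller.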

One common performance hindrance is a x2 PCIe 3.0 link to the Thunderbolt 3 controller. This applies to most single-TB3-port ultrabooks, the Razer Blade Stealth being the exception. The Dell XPS 13 is perhaps one of the most popular ultrabooks of the past few years. Many aim to use it with an eGPU, but it has been plagued by only 2 lanes for its sole Thunderbolt 3 port. The good news is that things are gradually changing with the emergence of Thunderbolt 3 external graphics solutions. At CES 2018 I discussed this with Gary L., a Dell system engineer, who confirmed that the latest 2018 XPS 13 9370 provides 4 PCIe lanes for its dual Thunderbolt 3 ports.

The HWiNFO64 screen capture on the left shows the PCIe configuration of the early 2018 Razer Blade Stealth. PCI Express Root Port #5 [A1/C1] provides a x4 connection to the Thunderbolt 3 [Alpine Ridge] controller, which in turn hosts a single Thunderbolt 3 port. PCI Express Root Port #3 [A1/C1] attaches to the Killer Wireless-n/a/ac 1535 Network Adapter, and last but not least, PCI Express Root Port #9 [A1/C1] attaches to a Samsung NVMe 960 controller. The HWiNFO64 screen capture on the right, of a Dell XPS 9360, shows its inferior x2 PCIe connection to the Thunderbolt 3 controller.

OPI Mode

An infrequently discussed feature is the On-Package DMI Interconnect Interface (OPI). ULV processors such as the i7-8550U use OPI because, unlike HQ or HK processors, they lack the Direct Media Interface (DMI 3.0) link that would otherwise facilitate communication between the PCH and CPU. System designers can configure these ULV processors either to extract the most performance or to optimize energy consumption: OPI 2GT/s is ideal for extended mobile use with low-power tasks, while OPI 4GT/s suits high-performance applications. The two modes operate at a maximum theoretical throughput of 20Gbps and 40Gbps respectively. Given Intel’s claim of 40Gbps bandwidth for Thunderbolt 3 connectivity, OPI 4GT/s is clearly the more appropriate choice for eGPU use.

Unfortunately the OPI setting is not something users can change at their convenience; it is baked into the system firmware/BIOS. At the moment PC manufacturers do not disclose the OPI mode in their marketing material, so the only way to find out is to run performance tests yourself. If the ultrabook has an NVMe storage drive, a sequential read speed exceeding 1,800 MB/s in benchmark software such as ATTO confirms the laptop is set to OPI 4GT/s.
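The check boils down to a single threshold. A minimal sketch of that inference (the 1,800 MB/s cutoff is the heuristic from this article; it infers the mode from a benchmark number rather than querying the firmware directly):

```python
# Heuristic from the article: an NVMe sequential read above 1,800 MB/s
# implies the firmware is set to OPI 4GT/s; a slower result suggests
# OPI 2GT/s -- or simply a slow SSD, which this test cannot distinguish.
def likely_opi_mode(seq_read_mb_s: float) -> str:
    if seq_read_mb_s > 1800:
        return "OPI 4GT/s"
    return "OPI 2GT/s likely (or a slow SSD)"

print(likely_opi_mode(2900))  # e.g. a Samsung 960-class drive -> "OPI 4GT/s"
print(likely_opi_mode(1500))  # -> "OPI 2GT/s likely (or a slow SSD)"
```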

As seen in the AIDA64 benchmarks above, OPI 4GT/s systems extract the most out of a Thunderbolt 3 eGPU. Keep in mind that Intel caps the throughput of these eGPU enclosures at roughly 22Gbps to preserve bandwidth for DisplayPort transmission over Thunderbolt 3. As external graphics adoption and demand grow, we hope Intel and its partners dedicate more resources to optimizing Thunderbolt 3 performance in general and external graphics use in particular.
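The arithmetic behind these numbers is worth spelling out. The figures below follow the article (the ~22Gbps cap is approximate), plus the standard PCIe 3.0 line-rate math, to show why a "40Gbps" port never delivers 40Gbps to the GPU:

```python
# Why "40Gbps Thunderbolt 3" never reaches the eGPU: PCIe data rides on
# an x4 PCIe 3.0 link, and Intel reserves part of the TB3 link for
# DisplayPort. The ~22Gbps eGPU cap below is approximate (per article).
lanes = 4
gt_per_lane = 8                      # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130                 # PCIe 3.0 128b/130b line encoding
pcie_ceiling_gbps = lanes * gt_per_lane * encoding
print(f"PCIe 3.0 x4 payload ceiling: {pcie_ceiling_gbps:.1f} Gbps")  # ~31.5

tb3_link_gbps = 40
egpu_cap_gbps = 22                   # approximate cap observed on eGPU enclosures
print(f"Left for DisplayPort and overhead: ~{tb3_link_gbps - egpu_cap_gbps} Gbps")
```

So even a perfect x4 + OPI 4GT/s laptop tops out well under the headline figure; the goal is simply to avoid halving it again with a x2 link.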

Best Ultrabooks

Below are the best ultrabooks with the trinity of performance specs to host an external GPU. If you have an Intel 8th gen quad-core ultrabook not in this list and can confirm x4 PCIe + OPI 4GT/s, please share your findings by posting a build guide in our forum. We’ll keep this list up-to-date with user reports.

Lenovo X1 Carbon

$2,099

Want more performance?

Intel has released 8th-generation six-core 45W H-series CPUs. These come packaged in laptops with larger chassis to accommodate more substantial cooling systems, coupled with bulkier power supplies to drive them. A bigger package that packs a bigger punch: it’s worth it if you can trade some portability for performance.

Older notebook options?

eGPU.io user builds listed as 32Gbps-TB3 are confirmed to offer full Thunderbolt 3 bandwidth. At the link below, select LCD size and CPU, then review the System Brand and Model widget for options to consider, and proceed to the user builds for your targeted system.

Well-written guide! I am excited to see the recent developments with eGPUs. My next laptop will definitely be a thin and light one with an eGPU. Personally I hope Apple releases a 13" MBP with a 4-core CPU in the near future (or even a 6-core 15") with native eGPU support.

Very informative article, but it does leave me with some questions about the PCIe lanes. Since there are 12 total, if I were going to purchase an ultrabook 2-in-1 with a dedicated graphics card, would it be better to get a SATA3 SSD instead of an NVMe one (because NVMe takes up x4 PCIe lanes)? I only ask because of the upcoming ASUS ZenBook Flip 15 (i7-8550U, 16GB DDR4 RAM, GTX 1050) UX561UD. It gives the option of an HDD, SATA3 SSD, or a PCIe SSD. Does a SATA3 SSD take up any PCIe lanes? This leaves me wondering whether the advertised Thunderbolt 3 will be a full x4 PCIe or x2 PCIe. Sorry if these are dumb questions, I am completely new to this.

I have my doubts about these latest-gen CPUs too; most games favour clock speed over core count, and I imagine the extra cores mean more heat and quicker throttling. Of course that's all just guesswork from me, going to try to look up some comparison benchmarks...

Even if you have a game which mismanages cores/multithreading, Windows has a processor affinity mechanism in task manager that, err... manages this:

...and since the dual-core turbo boost of 8th gen CPUs is actually considerably higher than 7th gen, there's no reason they shouldn't perform not just as well as, but better than, 7th gen CPUs. My experience in games and VR improved quite a bit going from the 7500U to the 8550U. YMMV.
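If you'd rather script the pinning than click through Task Manager, the affinity value Windows uses is just a bitmask over logical cores. A quick sketch (the core indices and `game.exe` are only examples):

```python
# Windows' `start /affinity <hexmask>` takes a bitmask of logical cores.
# Core indices below are just an example; adjust to your CPU's layout.
def affinity_mask(cores):
    """One bit per logical core index, e.g. [0, 1, 2, 3] -> 0xF."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

mask = affinity_mask([0, 1, 2, 3])  # pin to the first four logical cores
print(f"start /affinity {mask:X} game.exe")  # prints: start /affinity F game.exe
```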

@laxlad The PCIe lane allocation is a hardware (motherboard) thing; changing how the drive is connected will not change it, unless the SATA-drive laptop physically lacks the NVMe port (and has a different motherboard, which is unlikely). Normally the non-NVMe option exists to free that port for another device or to make the HDD configuration cheaper. I have not yet seen a single device where those lanes were reallocated to TB3.

USB-C is provided by the Thunderbolt circuit as far as I know, so it doesn't need separate lanes. Indeed, the laptop you linked has two Thunderbolt 3 ports, which makes 4 PCIe lanes a hard requirement. The laptop either connects the GPU over x2 or the NVMe socket only has two lanes. My bet is on the latter -- but the GPU wouldn't be bottlenecked by x2 anyway; it's only a 1050.

The truth is that x2 vs x4 only matters if someone wants to use a 1070 or faster to accelerate the internal display. Otherwise, you could pick anything. I am trying to spread this information, but it's an uphill battle with articles like this.

@chx if you get to choose between a TB3 system with x2 PCIe and another with x4 PCIe, which one would you pick?

I’m in MN and there’s a lot of snow on the ground this past week. Most of the 4-5 month long winter, the roads are rather nice and dry. I don’t get to see the benefits of snow tires very often. When I do, it’s significantly better than all-season tires.

This article serves as a buying reference to pick the best ultrabook for eGPU use. It’s not reasonable to put an asterisk saying x2 PCIe does not matter unless you plan on accelerating the internal display with more powerful GPUs such as GTX 1070/RX Vega 56. The very selling point of external graphics is GPU upgradability.

There are many different considerations when picking a laptop, and Thunderbolt bandwidth is only one of them. If we told people not to bother with it unless X, their pick would be that much easier.

This article needs a first sentence saying "if you are using an external monitor then this article doesn't apply, all of them are the same". That's all I am trying to say.

Thanks for the info guys. I didn’t clarify in my post, but the reason I was concerned about the x4 PCIe lanes is that I plan on connecting an SSD and Ethernet through the eGPU enclosure (Mantiz Venus). From what I gather, 4 lanes handle this better than 2. But if I were to just connect an eGPU without any other peripherals, I would be less concerned about 2 vs 4 lanes. Ownordisown on YouTube has a very good video showing the marginal difference between 2 and 4 lanes when just using an eGPU without connected peripherals.

Greetings
Very nice guide. I’m a newbie here; this forum has helped me a lot. Now I’m just curious why I should have to undervolt the i7-7500U in my Blade Stealth to be able to play AC Origins on the internal screen (1440p, high) with a Razer Core V2 and an EVGA 1080 SC. Is my ultrabook CPU too weak at default settings to play it? On my i7-7700HQ gaming laptop, I don’t need to close all running apps to run AC Origins with my Razer Core. Is there any GPU recommendation for a ULV CPU, price-for-performance for example? I also have a 1080 Ti, and I think I’ll sell it, because it is too bottlenecked in both of my systems.

You're right that it's wrong to say the TDP max is 15W, but it's also misleading to say it can do a sustained 25W, because that depends on the cooling architecture of the device. Most ultrabooks today will do 15W sustained with a 25W boost.

CPUs nowadays actually have three (manufacturer-provided) TDP modes of operation. This is not just about changing the TDP wattage in XTU and such; these are distinct limits/modes of operation with different guaranteed frequencies (the c in cTDP stands for configurable), for example for the 8550U:

The important part to understand is that cTDP up is neither a "normal" nor a dynamic mode (i.e. it's not Boost), but rather a "full steam ahead, engines be damned!" mode - it requires extra *core* cooling, which will not normally be present in ultrabooks. In addition, Boost and cTDP are not mutually exclusive: Boost goes *on top* of cTDP up (that's how you get to 35W+ draws).

Even those short bursts of 44W don't come for free - you can see that the CPU hits 100°C within seconds. Performance-wise, that's fine, but it definitely won't help silicon (and especially battery) degradation, and it makes for a nice lap-warmer 🙂

In practical terms, if having a "cTDP up" device with x4 lanes is so important, what you're saying is "I want an HQ chip" - where you get all the goodies - tons of cores, tons of PCIe lanes, high minimum clocks. U chips were never about raw performance, but about striking a good balance between battery life and performance. The main reason we're having this discussion is that chips like the i7-8700H have not started selling yet. Why cook an 8550U at 44W when, for the same draw (and money), the 8700H can run 6 cores at ~3GHz under full constant load?

Even those short bursts of 44W don’t come for free – you can see that the CPU hits 100C within seconds. Performance-wise, that’s fine, but it definitely won’t help silicon (and especially battery) degradation, and makes for a nice lap-warmer

Are you saying that Intel doesn't properly design their CPUs, or that lifespan will shorten if they are used this way? I'd love to see any data you have showing that designing around Intel's own power/thermal limits will result in a shortened design lifespan.

I am also assuming that the majority of people on eGPU.io are looking for CPU performance while attached to an eGPU, so the battery isn't really in use, nor is the laptop on your lap.

Please note that the new Dell XPS 13 9370 bottom runs much cooler than many of its competitors, despite using more power, so it doesn't really make for a nice lap warmer vs. its 15W peers.

In practical terms, if having a “cTDP up” device with x4 lanes is so important, what you’re saying is “I want a HQ chip” – where you get all the goodies – tons of cores, tons of PCI lanes, high minimum clocks. U chips never were about raw performance, but striking a good balance between battery life and performance. The main reason we’re having this discussion is because chips like the i7-8700H have not started selling yet. Why cook a 8550U at 44W, when for the same draw (and money), the 8700H can have 6 cores at ~3Ghz under full constant load.

First off, you can't cook an 8550U; you will temp-throttle first. You can have a 15W TDP CPU and a 25W TDP CPU that operate at the exact same temp, or have the 25W TDP CPU run even cooler/faster than the 15W one.

For example, look at the XPS 13 9360 vs the 9370. The last generation actually runs hotter despite only taking the 8550U to a 15W TDP, vs. the new 9370 which runs at ~25W TDP and operates at lower temps. It also runs cooler than the HP Spectre 13t and 13" x360.

What I am saying is: given two otherwise equal choices, it is more logical for eGPU users to pick the laptop with more performance, particularly since in gaming the CPU can really improve the overall experience. Unfortunately, Dell priced the XPS 13 9370 very high, so the choices aren't very equal.

I don't want an HQ chip; look at the smallest form factor you can get an HQ chip in... it's not 2.6 pounds.

The U is about TDP parameters. Just because the processor can work well at 25W sustained, and not leak power like crazy when you're on battery, doesn't mean it is a bad choice. CPUs have evolved and we are seeing some really unique offerings.

I get what you are saying, but it's pretty ridiculous to suggest that an OEM offering a 25W solution in a 13.3" package will somehow cook the silicon when I just demonstrated that 15W solutions can and do run even hotter. The key takeaway is that you can plug into a power source and get 20% extra performance for "free" without any loss of portability/battery life.

The last thing we want at eGPU.io is for someone to buy an 8550U that steps down to 7.5W and be completely confused why their very expensive setup is not performing like other 8550U setups. Likewise, someone asking about an 8250U vs an 8550U should understand that under a heavy workload both are going to be power- and/or temp-throttled anyway, so it doesn't really matter which they choose. The last thing I'd want is a potential eGPU gamer spending extra money on an 8550U when they'll end up temp/power throttled and should have spent the extra $200 on a different laptop with better cooling, or maybe a better eGPU box.

We just need to understand the capabilities of the different offerings on the market to help people select the device that will best meet their needs.

@ondert There are a few more laptops that should be good candidates just like the new LG Gram you mentioned. We’re waiting for actual implementation and confirmation that these laptops have x4 PCIe over TB3 as well as OPI 4GT/s before adding them onto the list.

Are you saying that Intel doesn’t properly design their CPUs or that lifespan will shorten if they are used this way? I’d love to see any data you have showing that designing Intel CPUs around Intel’s own power/thermal limits will result in shortened design lifespan.

Umm, yes? Intel gives you a warranty, which is based on a statistical model. Silicon ages, and the degradation rate correlates with both temperature and the number of executed cycles, even when you operate within the parameters. Intel does not know, or have a say in, what the thermals will be in an OEM device. The 100°C max junction temp does not mean "99°C is perfectly fine under any condition" and "101°C is bad". The higher you go, and the harsher the thermal shocks, the higher the statistical chance of silicon failure. I'm not saying this is a big problem or that many devices will break down - but it certainly doesn't help device lifetime.

I am also assuming that the majority of people on eGPU.io are looking for CPU performance while attached to a eGPU, so battery isn’t really used nor is it on your lap.

Please note that the new Dell XPS 13 9370 bottom runs much cooler than many of its competitors, despite using more power, so it doesn’t really make for a nice lap warmer vs. its 15w peers

Obviously I was joking about lap-warming - thermals are a far more complex topic than that. Good designs spread the heat around as much as possible to reduce hotspots (which might conflict with battery life, as said above).

First off, you can’t cook a 8550u, you will temp throttle. You can have a 15w TDP cpu and a 25W TDP cpu that operate at the exact same temp or have the 25W TDP cpu run even cooler/faster than the 15W TDP cpu

The primary goal of temp throttling is just to prevent imminent failure - it doesn't help with the long-term silicon and thermal wear and tear.

I don’t want a HQ chip, look at the smallest form factor you can get a HQ chip… it’s not 2.6 pounds.

Well, this is the having-the-cake-and-eating-it-too part - there is a reason for that chip size, and it has to do with (drumroll) thermals and silicon 🙂 It's like trying to get more horsepower out of a small engine in order to fit a small chassis. There is a reason those larger engines exist.

The U is about TDP parameters. Just because the processor can work well at 25W sustained and not leak power like crazy when you are on battery life doesn’t mean it is a bad choice. CPUs have evolved and we are seeing some really unique offerings.

The U in Intel's chip names stands for ultra-low power. I understand you want to max out performance and TDP, but that's not what the U series was originally meant for.

I get what you are saying but it’s pretty ridiculous to suggest that by an OEM offering a 25W solution in a 13.3″ package will somehow cook the silicon when I clearly just demonstrated that 15W solutions can and do run even hotter.

What I'm saying is that every OEM designs their own thermal system. Yes, you can make a good 25W design, and yes, you can make a bad 15W design. There are plenty of tradeoffs to make (thermal buffer, weight, volume, layout, battery life/size). In the example you've shown, the tradeoff is that CPU temps go very high, very quickly. I don't like that and think it's on the risky side. For some it's irrelevant.

The key takeaway here is that you can plug it into a power source, get 20% extra performance for “free” without any loss of portability/battery life.

The double standard is somewhat amusing. Your HP laptop runs hotter, you know; according to your long-winded explanation of thermodynamics, your HP laptop probably lasts only 50% as long as the Dell 9370. Everything you just said basically says the Dell's superior thermal solution and cooler operating temps make it an excellent choice 🙂

According to your link, since the Dell laptop's battery doesn't get as hot as the Spectre x360's, it will last much longer 🙂

So based on all that, you should probably switch laptops asap!!!!

In all seriousness, I personally don't think there is any lifespan difference between the Spectre x360 and the Dell XPS 9370... just different design choices that lead to different performance characteristics. If you disagree, that's fine too.

@irev210 No, I *am* in agreement that different design choices lead to different characteristics. What I didn't agree with is that those design limitations are random or largely irrelevant (i.e. that bumping wattage is "free"). There is a reason the same chips have different thermal and power limits - and it's not that somebody just forgot to tick a box (regardless of who the manufacturer is).

Here's an example for the two specific devices you mentioned - x360s and XPSes do not have 1:1 the same target markets; it's just that for eGPU use we need to work backwards from the TB limits. The x360 is a different form factor, meant for a slightly different market than the XPS 9370: it needs to function as a tablet as well, so the thermal design is bound to differ from a traditional laptop's. In tablet mode you have more limitations on the bottom exhaust, so you need to be able to radiate more heat topside, or the device will be constantly thermally throttled in tablet mode. TANSTAAFL. If that's an issue, and you don't need x360 functionality... then why an x360? Even within HP's lineup you're better off with a 13t - it's cooler AND sleeker AND has the same TB3 ports.

Very helpful, thank you!
Two questions.
First, a rather theoretical one: TB3 right now supports up to 40Gbps, but we are only connecting 4 PCIe 3.0 lanes (4×8Gbps). Would it be possible - or rather, useful - to connect 5 or 6 PCIe lanes to a TB3 port? Would that lower the performance drop we currently see on GPU cores?
Now secondly, a very specific question:
Is there a reason the MateBook X Pro is not listed as a good device for eGPUs?
Even though MobileTechReview states that its TB3 has 4 lanes, there is still controversy about that - is that the reason?
Or are there issues in setting up a box with the X Pro? On OneCoolThing they mentioned having issues setting up a Razer Core X - can anyone confirm or explain why that is?
Thanks for the article and any answers I can get to the above.

I am still searching for a laptop for my new setup, and there are lots of questions I still have and things I would like to understand better than I do at the moment. Thanks to anyone who can clear some things up for me (sorry, this will be a long post):

Just a small note before I begin: @4chip4, you mentioned the HP 13t above as a replacement for the x360. From everything I could find, the HP Spectre 13t costs the same (in Europe!), has less battery, the screen tilts less, and there seems to be more flex; the trade-off is a few mm less thickness. Worth it? For most, I guess not. I really like the idea of putting the ports at the back and getting a thinner laptop that way, but sadly the Spectre 13 becomes quite large by doing so, and its relatively thick top and bottom bezels make it notably deeper than the Dell XPS 13 at only 2mm less thickness, with lower performance (28W CPU in the XPS 13), weaker thermals, a smaller battery, lower build quality, and the same price tag (again, I only know the EU, and HP seems to carry more of a price premium than other companies) 🙁 Kind of sad, as I like the idea 🙁

Thanks @itsage. You say the dGPU model does - does that imply the other one has 4 lanes? To my knowledge that one also only has 2 lanes ;/ (From everything I read and saw in benchmarks, it suggests that unless you go into loopback mode, the loss at x2 is only about 5% higher than the loss at x4 - is this correct?)

Tied to this question, I still have very little reliable information on the overall performance difference between a laptop setup and a desktop.
Here https://egpu.io/forums/mac-setup/pcie-slot-dgpu-vs-thunderbolt-3-egpu-internal-display-test/ it is suggested that the overall loss is about 20% at 1080p. Then a Hardware Unboxed video paints a very different picture, with loss rates from 20% ("best case scenario") to over 50%. Could someone also address the mentioned latency issue and give some details on this problem?
Sadly the former tested only with lower-end GPUs, while the latter (the video) doesn't really go into the spec details of the setups used. itsage suggested in the mentioned eGPU.io post that "benchmarks for 1080p show 15% performance drop, 1440p 8% performance drop and 4k 5% performance drop with the same CPU in the same Desktop PC". Again these tests were done with not-really-high-end GPUs (980 Ti); does this matter, or does it hold true for any GPU?
In this discussion https://egpu.io/forums/which-gear-should-i-buy/i7-7700hq-vs-i7-8550u/#post-38998 a comment from @4chip4 suggests (to me at least) that this lower performance drop at higher resolutions is solely because of lower absolute framerates, since the CPU bottleneck limits the GPU and gives diminishing returns above that level. That would imply that if high framerates are attempted, the drop would be higher no matter the resolution; as bottlenecking is a thing here, the CPU should be the main factor, but either way I can expect around 20%? Is this a correct conclusion, or am I missing something?

I noticed that the benchmarks posted in the PCIe vs TB3 post vary quite drastically, and more importantly there seems to be a pattern: while most tests are in the low-loss range, two titles stand out, with Tom Clancy's Ghost Recon and Shadow of Mordor consistently having the highest loss. Both of those titles are heavily CPU-limited, and Ghost Recon especially is notorious for allowing zero play in balancing CPU and GPU usage. What confuses me is that those tests were done using comparable CPUs, right? So can anyone explain why this is?

So let's talk about CPUs: itsage mentioned in the original post that it may be worth waiting for the H CPUs. Now that we have a number of good laptops with them and have had a bit of time, are there any conclusions someone can draw about using one with an eGPU? Is the 8750H actually better than the 8550U? (This is also discussed here: https://egpu.io/forums/which-gear-should-i-buy/i7-7700hq-vs-i7-8550u/#post-38998 - I just thought the buyer's guide may be a better place to discuss, or to update if these CPUs actually do offer an advantage.)

@4chip4 mentioned (talking mainly about VR) in that CPU discussion that "going from 7500U to 8550U was definitely a big step up" and that with a "7700HQ and while I did no benchmarks, the subjective performance felt largely on-par with the 8550U". This signals quite a big leap with the i7-8550U - an unusually big leap, actually, considering the increase in cores and GHz compared to the other chips. More on this later.

You would assume that a better CPU actually leads to better performance, but a number of reports I saw do not really suggest this. For example, here https://www.ultrabookreview.com/20435-xps-15-9570-review-live/ we can see that the new XPS 13 outperforms the XPS 15 (slightly in total, but very clearly in graphics! The physics score (CPU-based??) seems to barely balance it out) even though the XPS 15 has the far better CPU. Any suggestion as to why this may be?

Seemingly manufacturers don't think an 8750H chip is beneficial or worth the effort for eGPU use either, as every single laptop with an 8750H on the market also has a dGPU (isn't that odd? Is it really that niche?). You could build more portable devices without the dGPU and even manage the crazy heat of the 45W series far better, with two fans and full-length heatpipes on both sides of the CPU - so why isn't this done?

Early last year, reports of Intel planning to integrate TB3 into their upcoming CPUs were all over the place (https://www.tomshardware.com/news/intel-integrates-thunderbolt-to-cpu,34501.html), and the previously mentioned article states the following: "my current theory is that where the XPS 13 9370’s TB3 signal feeds directly to the CPU, the XPS 15 9570 is using an additional daughterboard (Alpine Ridge) to provide TB3, which may induce some performance degradation in comparison." What are your thoughts on this? Why can I not find any reports of such TB3 integration online, while many users are seeing this kind of result in testing?

Here is another post strongly suggesting something is going on with the 8th gen CPUs: https://www.reddit.com/r/Huawei/comments/8niq84/matebook_x_pro_i7_3d_mark_scores_razer_core/
The MateBook is a x2 PCIe laptop and also uses the weaker CPU. The performance drop is quite drastic overall, and the score using a 1080 Ti (Fire Strike 12500) is similar to what we saw in the article on the XPS 13 using a 1070 eGPU (12175), so I really don't know what is going on there!?
But it gets crazier: this system still outperforms the Razer, which has a 7th gen HQ processor (equivalent to or better than the 8550U), and the Razer certainly has x4 PCIe vs the x2 of the MateBook X Pro.

My explanation is that the 8th gen U chip has some kind of optimization or integration for TB3 and is for that reason far superior (and also that you shouldn't use 1080 Tis on x2 TB3 :D). As for the negative Hardware Unboxed video on eGPUs, I assume it relates to the poor results he got, which are so far from what others, esp. itsage, report.

Now lastly: 10 months ago the 8550U was released. When do you think we will see 9th gen U chips? Whiskey Lake is the name I heard rumored, along with a double-digit improvement. Are there any other large improvements you see coming in the near future? There is still nothing about TB4 that I could find, and no mention of the Intel integration in over a year. The 8750H seems for some reason not to be superior to the 8550U for eGPUs, and from what I read about higher resolutions, it only depends on the framerate; this means running an 1180 on an 8550U would lead to similar performance drops at 1440p to what we see at 1080p with current 1080 (Ti)s.
As my goal is UWQHD @ 110+ FPS, that means there is no way I could do that on current tech, and I will have to wait for, hopefully, a good 9th gen U chip that can better support a GTX 1180.

Please let me know what you think about the points I made and the questions I asked, and please correct me if I made mistakes (which I likely did, as I am not an expert on any of this, just an AI student with some interest in hardware).