Coffee is Refreshing, if it is authentic

Geekbench leaks are pretty common and often at least a wee bit photoshopped, so don't take the leaked performance numbers as fact, though there is a slight possibility they are based in reality. The i9-9900K is shown with a score of 6248 points in single-core and 33037 in multi-core, while the i7 part reached 6297 and 30152 respectively. The logic holds somewhat, as the i7-9700K is clocked 100MHz lower than the i9 and has half the threads since it lacks Hyper-Threading. These numbers are higher than Ryzen 2 in single-threaded performance but fall behind in multi-threaded apps, as has historically been the case.
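As a quick sanity check on the leak (treating the scores themselves as unverified), the multi-core-to-single-core scaling ratio for each chip is simple arithmetic:

```python
# Back-of-the-envelope scaling check on the leaked Geekbench 4 numbers.
# The scores come straight from the leak and may well be fabricated.

def mc_scaling(single: int, multi: int) -> float:
    """Multi-core score divided by single-core score."""
    return multi / single

i9_scaling = mc_scaling(6248, 33037)   # i9-9900K: 8 cores / 16 threads
i7_scaling = mc_scaling(6297, 30152)   # i7-9700K: 8 cores / 8 threads

print(f"i9-9900K scaling: {i9_scaling:.2f}x")  # ~5.29x
print(f"i7-9700K scaling: {i7_scaling:.2f}x")  # ~4.79x
```

The i7's roughly 4.8x scaling over 8 threads versus the i9's roughly 5.3x over 16 is at least consistent with Hyper-Threading adding a modest rather than doubling gain, for whatever the leaked numbers are worth.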

"Geekbench 4 benchmarks for three next-generation Core processors popped up early according to the folks at WCCFtech, with the chips so far look to be mild refreshes on the current Coffee Lake S generation of processors."


"These numbers are higher than Ryzen 2 in single threaded performance but fall behind on multi-threaded apps; as has been historically the case."

Some more analysis here is needed, instead of just stating that this is expected -- it isn't. AMD has been ahead in multi-core because their CPUs historically have had more cores. If Intel is still behind even with an 8 core/16 thread CPU that even has an IPC and clock speed advantage -- that is massive and needs further study.

IPC is not everything, as gaming appears to be very affected by latency of any kind. Just look at how different DDR4 DRAM memory timings can affect some games greatly.
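The timing effect is easy to make concrete: a DIMM's first-word latency in nanoseconds follows from its CAS latency and transfer rate (the memory clock is half the DDR transfer rate, hence the factor of 2000). The specific kits below are just illustrative examples:

```python
def cas_latency_ns(transfer_rate_mt_s: float, cas_cycles: int) -> float:
    """True CAS latency in ns: CL cycles divided by the memory clock,
    which is half the DDR transfer rate (hence the 2000)."""
    return cas_cycles * 2000.0 / transfer_rate_mt_s

# Two DDR4-3200 kits that differ only in timings:
print(cas_latency_ns(3200, 16))  # 10.0 ns
print(cas_latency_ns(3200, 14))  # 8.75 ns
```

A CL14 kit shaves over a nanosecond off every CAS access versus CL16 at the same transfer rate, which is exactly the kind of latency difference that shows up in frame times for latency-sensitive games.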

Intel is still ahead in IPC, but not by as much as it has been in the past. Even on a ring-bus processor, latency goes up with core count, and a game that uses CCX affinity to run on a single CCX's 4 cores/8 threads may hold a small latency advantage over a game run across 8 cores/16 threads attached by a ring bus. A game confined to a single CCX is certainly not going to be affected by any inter-CCX Infinity Fabric latency limitations.
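The single-CCX idea above can be sketched on Linux by pinning a process to one CCX's logical CPUs. This is only an illustration under stated assumptions: 4 cores per CCX on an 8-core part, and an SMT-sibling numbering where the sibling of physical core c is logical CPU c plus the physical core count. Neither is guaranteed; check /sys/devices/system/cpu/cpu*/topology/thread_siblings_list on real hardware.

```python
import os

def ccx_logical_cpus(ccx_index, cores_per_ccx=4, total_cores=8, smt=True):
    """Logical CPU ids belonging to one CCX.

    ASSUMPTION: the SMT sibling of physical core c is logical CPU
    c + total_cores. This layout is common but not guaranteed; verify
    against the sysfs topology files on real hardware.
    """
    first = ccx_index * cores_per_ccx
    physical = list(range(first, first + cores_per_ccx))
    if smt:
        return physical + [c + total_cores for c in physical]
    return physical

cpus = ccx_logical_cpus(0)  # CCX 0 on an 8-core Zen part
print(cpus)                 # [0, 1, 2, 3, 8, 9, 10, 11]

# Pin the current process (e.g. a game) to that CCX; Linux only, and only
# if the machine actually exposes enough logical CPUs.
if hasattr(os, "sched_setaffinity") and max(cpus) < (os.cpu_count() or 1):
    os.sched_setaffinity(0, set(cpus))
```

Every thread the game spawns then inherits that mask and stays on one CCX, so no cross-CCX traffic has to cross the Infinity Fabric.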

I'm not talking only about memory latency here so much as responsiveness, and cache size helps even more if a game's most necessary routines can mostly reside in cache and not be evicted from it. That's where gaming software optimization can help the most: designing the most essential gaming code to fit in L3 cache and not allowing that necessary code to be evicted.
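A rough way to reason about "fits in L3" is plain working-set arithmetic. A minimal sketch, assuming the 16 MiB of L3 that both the i9-9900K and the Ryzen 7 2700X carry:

```python
def fits_in_l3(n_elements: int, bytes_per_element: int = 4,
               l3_bytes: int = 16 * 1024 * 1024) -> bool:
    """Whether a hot working set of n_elements stays within the L3 budget.
    16 MiB matches the i9-9900K and Ryzen 7 2700X; adjust for other parts."""
    return n_elements * bytes_per_element <= l3_bytes

# ~4 MB of hot data fits comfortably; ~32 MB spills to DRAM every frame.
print(fits_in_l3(1_000_000))   # True
print(fits_in_l3(8_000_000))   # False
```

In practice the budget is shared with every other thread on the chip, so the usable slice per game system is smaller still, which is why keeping the truly hot code and data compact pays off.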

Higher clocks help things get done faster, but it's overall latency and the total number of draw calls that can be serviced that help more at 1080p and lower resolutions. That's not going to be the case for 1440p gaming and higher with all the settings turned up, where things become more GPU-limited than CPU-limited.

I'd still think that Intel will have some latency advantage and that the higher clocks will allow the work to be completed faster. But really, how much can a CPU do in the 16.67 to 33.33 milliseconds per frame that 30FPS-60FPS gameplay mostly allows?
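Those per-frame numbers fall straight out of the target frame rate:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to simulate, build draw calls, and render one frame."""
    return 1000.0 / fps

for fps in (30, 60, 144):
    print(f"{fps:>3} FPS -> {frame_budget_ms(fps):.2f} ms per frame")
# 30 FPS -> 33.33 ms, 60 FPS -> 16.67 ms, 144 FPS -> 6.94 ms
```

The budget halves going from 30FPS to 60FPS, which is why CPU-side latency matters far more to high-refresh 1080p gaming than to a GPU-bound 30FPS target.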

All that draw call traffic is going on over PCIe 3.0, with that protocol's added latency, and some games are to this day so poorly optimized that no one's version of CPU makes much difference on some titles. AMD's and Intel's respective CPU microarchitectures are getting so close in performance for many games that gaming becomes mostly GPU-dependent at 1440p and above.

And now there is Nvidia's RTX GPU IP to bring into the gaming tests in not too many more days. That's going to take ages for the benchmarks to measure, and a damn good while before the games ecosystem can take advantage of any GPU IP released in the past few years, RTX included. Look at AMD's Vega explicit primitive shaders: game makers are taking so long to adopt them that Navi is expected to arrive before the gaming ecosystem begins to make use of that IP, which will still be there for Navi as well. At least Vega owners can look forward to some Navi-optimized games with explicit-primitive-shader code being backported to Vega.

If that leaked Nvidia whitepaper is to be taken seriously, then Nvidia is some years ahead with its ray tracing cores and the trained AI denoising algorithm running on those tensor cores that makes the limited ray tracing on Turing possible. I'm curious to see whether AMD's explicit primitive shaders can let Vega's shaders do ray-tracing-optimized processing for gaming, but what about the tensor core/AI denoising part of the hybrid ray tracing that Nvidia is doing? AMD currently has no in-hardware tensor core capabilities, but that's probably not going to be hard for AMD to add, given the other tensor cores that many were making use of even before Nvidia added tensor cores starting with its Volta GPU microarchitecture.