With AMD's Threadripper family just a few weeks away from launch, preliminary benchmark results are already trickling in via both Geekbench and SiSoft Sandra. This latest set of leaks isn't the first benchmark of the flagship 1950X, but it is the newest, and thus should give us a more accurate picture of present optimizations.

Interestingly, the single-core performance dropped a bit on Geekbench, from 4216 to 4074. The chip made up for it in multi-threading, however, posting a result of 26768, up from 24723. Sadly, these numbers still pale in comparison to the 10-core i9-7900X in both single-threaded and multi-threaded figures. As the 1950X ships with significantly lower clocks than the i9-7900X (with boost considered, anyway), I suppose it will truly come down to whether these CPUs can close the gap via overclocking, or via optimizations toward launch and beyond. Either way, it seems there may be a bit of a hill to climb to get there. Whether or not it is surmountable remains to be seen.

That said, keep in mind that even if AMD does not steal the crown, these CPUs could be a very good value (dare I say it? "Disruptive?"). That's up to AMD, but remember that any price cuts it makes to compete with Intel on value hurt the company's bottom line. AMD would probably prefer the crown if it can have it, so it can charge more, like any good business. Either way, competition never hurt the consumer, so let's hope for all our sakes the product is as "disruptive" as it possibly can be.
Sources:
Geekbench, SiSoft Sandra

(Edit - this is why you should refresh the page before you comment :P).

The SiSoftware one should be right, though - the multimedia benchmark gets a huge boost from the AVX units on Intel's chips. On a side note, I'm not entirely sure AVX should be used as a benchmark comparison, as it needs to be specifically coded for (which most applications aren't, and probably won't be).

FR@NK said: Going from 16 to 32 threads won't help much even if the overhead is only 5%. Then remember that more cores normally means lower clock speeds, which hurts the "speedup" as well.

This only applies to something that has a primary thread (or something that cannot be broken up).

This does not apply to workloads that have no main thread restrictions.

Natural Selection 2 has a primary thread that gets bogged down very easily.

Things like encryption, encoding, rendering, OCRing, zipping, and so on may or may not have this main-thread issue, depending on whether they are written correctly.

Nothing prevents OCRing from 100% loading a 1000-core CPU if the program is written correctly. Nothing prevents one page from going to each core of a 2000-page document.

The same applies to zipping and encoding. These workflows can absolutely be spread evenly between all cores if written correctly.

Nothing stops a file from being broken up into 1000 pieces if the program and settings are designed to do so.
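As a rough sketch of the chunk-splitting idea above (a hypothetical example - the function names and chunk size are made up, and the hash is a stand-in for real per-chunk work such as compressing a block or OCRing one page):

```python
import hashlib
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk: bytes) -> str:
    # Stand-in for real per-chunk work (compress a block, OCR a page, ...)
    return hashlib.sha256(chunk).hexdigest()

def process_file(data: bytes, chunk_size: int = 1 << 20) -> list[str]:
    # Split the input into independent pieces, one unit of work per piece
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Spread the pieces across all available cores
    ctx = multiprocessing.get_context("fork")  # assumes a Unix-like OS
    with ProcessPoolExecutor(mp_context=ctx) as pool:
        return list(pool.map(process_chunk, chunks))
```

Because every chunk is independent, nothing here serializes on a main thread; the work scales with the number of cores until I/O becomes the bottleneck.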

Many people assume this "law" applies to all types of workloads, which is patently false.

They fail to understand the core assumption in this law: that X% cannot be threaded. Not every workload is affected by this, as I showed above.

Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours using a single processor core, and a particular part of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20 times (1/(1 − p) = 20). For this reason parallel computing with many processors is useful only for very parallelizable programs.
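The numbers in that example are easy to check; a minimal sketch of Amdahl's formula, where p is the parallelizable fraction and n the processor count:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup for a parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# 20-hour job with 1 serial hour (p = 0.95):
# amdahl_speedup(0.95, 16)    -> about 9.14x on 16 cores
# amdahl_speedup(0.95, 10**9) -> approaches the 20x ceiling
```

No matter how large n grows, the 1 - p serial term keeps the speedup below 1/(1 - p).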

FR@NK said: Going from 16 to 32 threads won't help much even if the overhead is only 5%. Then remember that more cores normally means lower clock speeds, which hurts the "speedup" as well.

Amdahl's law assumes a single synchronous task. If you have a number of independent tasks, you can scale almost perfectly up to any number of processors, provided you are not limited by other factors like I/O.

BTW, if by mentioning going from 16 to 32 threads you are referring to SMT: these are just virtual cores, and you can never use that as a benchmark of scaling in multithreading.

HopelesslyFaithful said:
The same applies to zipping and encoding. These workflows can absolutely be spread evenly between all cores if written correctly.

Nothing stops a file from being broken up into 1000 pieces if the program and settings are designed to do so.

Just a little note: many types of encoding, compression, etc. can't be parallelized due to dependencies, but once you design something to be independent, you can scale like you describe.

HopelesslyFaithful said:
Many people assume this "law" applies to all types of workloads, which is patently false.

They fail to understand the core assumption in this law: that X% cannot be threaded. Not every workload is affected by this, as I showed above.

I remember back in school: academia loves laws, theorems, postulates, and quotes, and that BS was what the exams were about, not actual deep technical understanding.

As you mentioned, Amdahl's law is a specific case under certain preconditions, and when these conditions apply the conclusion is obvious. These kinds of "laws" just create more confusion than anything else, since people believe these are universal laws. People would be better served by not caring about them at all.

HopelesslyFaithful said: The same applies to zipping and encoding. These workflows can absolutely be spread evenly between all cores if written correctly.

Incorrect.

You will always need some non-multithreaded code that divides up the work between all the threads and then puts all the data back together afterwards.
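To make that concrete, here is a hypothetical fork-join sketch (names made up): the split and the final combine run serially, even though the per-piece work is spread across processes.

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def count_vowels(piece: str) -> int:
    # The parallel part: each piece is counted independently
    return sum(ch in "aeiou" for ch in piece)

def count_vowels_parallel(text: str, workers: int = 4) -> int:
    # Serial step 1: divide the work into roughly one piece per worker
    step = max(1, len(text) // workers)
    pieces = [text[i:i + step] for i in range(0, len(text), step)]
    ctx = multiprocessing.get_context("fork")  # assumes a Unix-like OS
    with ProcessPoolExecutor(max_workers=workers, mp_context=ctx) as pool:
        partials = list(pool.map(count_vowels, pieces))
    # Serial step 2: combine the partial results
    return sum(partials)
```

The split and combine steps here are cheap relative to the counting, so the serial fraction stays tiny - but it never reaches zero, which is the point being made above.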

efikkan said: Amdahl's law assumes a single synchronous task. If you have a number of independent tasks, you can scale pretty perfectly up to any number of processors.

The benchmark in this news post runs one single task at a time... it doesn't run all of the tests concurrently.

efikkan said: BTW, if by mentioning going from 16 to 32 threads you are referring to SMT: these are just virtual cores, and you can never use that as a benchmark of scaling in multithreading.

Using SMT threads is fine. The multithreaded software sees 32 threads and will use 32 threads. Turning off SMT would change the speedup factor in the comparison between these processors, but that's not what was tested in the benchmarks listed in this news post.

AMD plans to charge $850 for it; why would someone pay so much for a ~26,000 score if Intel can reach ~35,000 for $1,000? An Intel i9 with 16 cores will reach over 40,000.

X299 is then not such a bad option; when prices of the i9 line start to come down over the next few years, people will have the opportunity to buy them later. If their performance turns out far better than the i9-7900X's, they will work as overclockable Xeons. And you know how long Xeons serve customers; that's not a platform to replace after 10 months, like mainstream, just because a new chipset or socket arrives.

Even if it's only about on par with a 10 core 7900X, if they price it right, it's probably going to sell.

The main issues seem to be that X399 is going to be an even more expensive platform than X299, that you'll need a new cooler because of how massive the Threadripper IHS is, and general issues with it being an MCM of two Ryzen dies.

If the scores don't change much by release, it does show that the Infinity Fabric between the two dies is pretty poor. I'm also expecting issues with PCIe lanes and RAM compatibility, because all of TR's features are split between the two dies.

Also, a lot of people are jumping onto X299 - the best boards and CPUs are mostly sold out already (for now) - I wonder how many people will wait and see if TR is better than this, or just give up and jump off the hype train before it crashes into a blue wall...After all, this "leak" could be straight BS.

Hood said:Also, a lot of people are jumping onto X299 - the best boards and CPUs are mostly sold out already (for now) - I wonder how many people will wait and see if TR is better than this, or just give up and jump off the hype train before it crashes into a blue wall...After all, this "leak" could be straight BS.

There is no debate that Intel's CPU is better, but for people who can get by without excellent single-thread performance and can effectively use all the cores, the AMD CPU makes for a (potentially) very attractive system, especially with all the PCIe lanes. Many people are PCIe-lane starved, and this is a very attractive offer. AMD may also support ECC, which Intel's chips do not, so for anyone who needs ECC and wants decent single-thread performance, AMD is the only option.

If Intel offered ECC support on HEDT, it would steal a lot of AMD's customers.

Hood said:Also, a lot of people are jumping onto X299 - the best boards and CPUs are mostly sold out already (for now) - I wonder how many people will wait and see if TR is better than this, or just give up and jump off the hype train before it crashes into a blue wall...After all, this "leak" could be straight BS.

Sorry, that is not accurate. It's actually the other way around. Almost nobody is jumping to the new platform, due to one major thing: the unavailability of quality motherboards and insufficient stock of the existing ones - the main reason most of them are sold out.