Hyper-threading doesn't help with multiple monitors; it helps with what people RUN on multiple monitors. Hyper-threading increases average parallel compute throughput. It's partly the reason an i3-4130 can trade blows with an i5-2400S in parallel workloads. It's also the reason some people get better results with an FX-8350 than with a similarly priced i5. If I'm gaming on one monitor and want to keep the stock tickers and a movie playing on the other, then I may have a way to leverage some additional parallelism afforded by hyper-threading.

Once you get past the FX-6300 (which represents the entry-level tuner/gaming performance platform) or an i3-4130 (a chip that delivers 65% of the performance of an i5-4670 for half the price), every rung of the performance ladder is potentially expensive and could be rationalized as poor value. (Note: you can buy the FX-6300 plus a very well made motherboard for less than the cost of the i5-4670K alone, so they are separated by well-spaced rungs on the cost ladder.) Considering the performance advantages of the 4670K over the FX-6300, I could just as easily claim that it "isn't worth the $100 premium for so little," just like the i7-over-i5 consideration. One could very reasonably and legitimately stop at EVERY imaginable rung of the ladder and make the case that it isn't worth another $100+ to step up. As you go up the ladder, the rungs get further apart, and the performance gains get smaller and smaller for desktop machines. The i5-4670K costs 100% more than an FX-6300 yet will struggle to achieve 50% better performance; the i7-4770K costs half again as much as an i5, yet will rarely return more than 30% better performance; the 4930K is *almost* double the price of the 4770K and, again, will struggle to deliver 30% performance improvements.
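To make the diminishing returns concrete, here's a quick back-of-the-envelope sketch. The prices and relative performance numbers are just the ballpark figures from this post (normalized to the FX-6300), not benchmark data:

```python
# Illustrative price/performance ladder using the rough figures above.
# Performance is normalized so the FX-6300 = 1.0; all numbers are ballpark.
ladder = [
    ("FX-6300",  120, 1.00),
    ("i5-4670K", 240, 1.50),  # ~100% more money, ~50% more performance
    ("i7-4770K", 340, 1.80),  # ~half again the i5's price, <30% faster
    ("i7-4930K", 580, 2.20),  # *almost* double the 4770K, <30% faster again
]

for name, price, perf in ladder:
    print(f"{name:9s} ${price:3d}  perf {perf:.2f}  perf per $100: {perf / price * 100:.2f}")
```

Each rung up the ladder buys strictly less performance per dollar, which is the "every rung looks like poor value" effect described above.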

Once you get out of the material economics in the basement, where you're paying for the packaging, the raw materials, and the convenience of being able to purchase it at all more than for actual performance (the Celerons/A4/A6, most junker GPUs, etc.), and work your way up the desktop consumer ladder, you are always "approaching" the transition zone to an entirely different hardware economy: compute density. The closer you get to this separate "zone" of computer hardware economics, the less performance/$ you will get for a desktop machine. In the compute-density economy, $1000-2000 CPUs are the norm, because they don't have to compete with each other directly, but with what it would cost to IMPLEMENT each other (more machines vs. fewer machines, more space vs. less space, overall compute efficiency and density, etc.). The $2000 server CPU that is only 15% faster than the $1000 CPU is actually competing with what it would cost to implement the additional slower CPUs, ALL THINGS CONSIDERED (including the real estate).
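The compute-density trade-off can be sketched as a toy total-cost calculation. Every dollar figure here is hypothetical (the per-machine overhead, rack cost, and machine counts are invented for illustration, not real pricing):

```python
# Toy TCO model: delivering the same aggregate throughput with many slower
# CPUs vs. fewer faster ones. All figures are hypothetical.
def tco(cpu_price, machines, per_machine_overhead=4500, rack_cost=1500):
    """Total cost: each machine drags in a CPU plus chassis/PSU/board/RAM
    overhead plus its share of rack real estate."""
    return machines * (cpu_price + per_machine_overhead + rack_cost)

# A $2000 CPU that's 15% faster needs ~13% fewer machines (100 / 1.15 ≈ 87).
slow_fleet = tco(1000, 100)
fast_fleet = tco(2000, 87)
print(f"$1000-CPU fleet: ${slow_fleet:,}")
print(f"$2000-CPU fleet: ${fast_fleet:,}")
```

Under these (invented) overheads the double-priced CPU still comes out cheaper overall, which is how a 15% performance edge can justify a 100% price premium in that economy.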

If you look at a machine as the price of the individual components, i5 vs. i7, etc., it's easy to rationalize any stopping point. However, if you step back and look at the machine *as a whole* (the same way the compute-density economics work), those $100 "rungs" on the ladder are often only going to represent a 5-15% increase in the entire cost of the machine. A 10% cost increase for up-to 30% performance increases (depending on workload), and suddenly the i7 vs. i5 has been re-legitimized. The cost of the monitor/keyboard/mouse/speakers/motherboard/PSU/chassis/HD/SSD/ODD/RAM/OS (unless Linux) can't be avoided. You're going to be "out" that money one way or another to build a rig. At that point, with $600-1000+ already invested, there are a lot of legitimate reasons to consider "getting the most" out of that semi-static investment by gracing that useless stack of stuff with a decent CPU and GPU, turning it into a computer.
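As a quick sanity check on the whole-machine framing (component prices here are illustrative, not a real parts list):

```python
# The $100 CPU step viewed against the whole build rather than in isolation.
base_build = 900   # illustrative: monitor/KB/mouse/PSU/chassis/SSD/RAM/OS + i5
i7_premium = 100   # the step from i5 to i7

cost_increase = i7_premium / base_build   # ~11% more money overall
best_case_perf = 0.30                     # the "up-to 30%" figure from the post

print(f"whole-machine cost: +{cost_increase:.0%}")
print(f"performance: up to +{best_case_perf:.0%} (workload dependent)")
```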

Perspective changes everything. Furthermore, for many, there is *value* in the novelty of owning particularly nice things regardless of their performance/$ ratio.

DrMetal,

The hardware configuration of the new game consoles isn't really "available" on the desktop right now, except in Kaveri (the closest similar architecture), which represents a fraction of the GPU configuration of the console design. Any decent gaming rig built today is not going to accept any sort of "direct port" that utilizes hardware the same way it works on the console anyway. AM3+ has been relegated to legacy status, and certainly does not represent AMD's "platform of the future" that will narrow the gap from consoles to PCs (easier porting, etc.). If that's what you are after, then I would suggest building a high-value rig for now, then re-checking the hardware landscape in 2 years to see what has happened.

Those are some very interesting points. Never thought about it that way.

Nonetheless, correct me if I'm wrong, but I don't believe hyperthreading actually helps with running multiple poorly optimized programs at once. Rather, hyperthreading helps a lot when it comes to very well optimized tasks: encoding (video, sound, streaming). Assuming our friend wants to stream at high resolution/quality, then sure, the i7 pays for itself. If he's going to turn off hyperthreading to get a higher overclock out of the i7 (since hyperthreading produces extra heat), it's kind of silly to get an i7 over an i5.

$100, even if it's less than 10% of the total system cost, still isn't nothing. $100 is the difference between an H81 and a good Z87. It's also pretty much a free PSU (with some money left over). I also think it's wrong to say that the i7 performs 30% better than an i5. The situations where that's true are pretty limited, especially for a gaming rig. Call me stingy, but knocking 10% off the total system cost seems like a pretty damn nice deal to me. Even 5% is good. There are only a handful of tasks where the i7 beats the i5 hands down, and if I'm not going to be doing those tasks on a regular basis, I'll just keep my money and accept slightly lower performance whenever I do.

Well, at this point I'm just playing the devil's advocate really. Thank you for the big post, no matter what you think is best, there are some interesting ideas in there I hadn't thought of before.

Quote:

Those are some very interesting points. Never thought about it that way.

I've had a lot of time to think about it. I've been infatuated with computer hardware, comparing and contrasting it, since I was like 12 years old. (I'll be turning 30 in April.)

Quote:

Nonetheless, correct me if I'm wrong, but I don't believe hyperthreading actually helps with running multiple poorly optimized programs at once. Rather, hyperthreading helps a lot when it comes to very well optimized tasks: encoding (video, sound, streaming). Assuming our friend wants to stream at high resolution/quality, then sure, the i7 pays for itself. If he's going to turn off hyperthreading to get a higher overclock out of the i7 (since hyperthreading produces extra heat), it's kind of silly to get an i7 over an i5.

Hyper-threading is often misunderstood. In simplest terms, it can be summed up as follows: the ability to execute an integer instruction while simultaneously performing a floating-point calculation.
It means that rather than having to choose between the 2 operations on each "cycle" (like a Pentium or i5 does), it can do both at the same time, within the same core. This improves IPC and compute efficiency when leveraged (improved saturation minimizes losses; note: the i7 under a fully saturated workload will only use about 15% more power than an i5, while producing 30% higher compute throughput).

When hyper-threading first came out, leveraging it was more difficult, and the implementation had many flaws. Software wasn't compiled for it, and OS scheduling was garbage. Over a decade of refinements to the technology, along with software-side changes (both in the OSes and in the way software is compiled to take advantage of it), have led to a very mature technology that can offer performance scaling in almost any mixed workload that has spawned enough threads (whether they come from different programs or not). Any mixed parallel workload (integer/FPU) can theoretically scale into hyper-threading these days. Hyper-threading does indeed improve performance when a computer is multi-tasking. If this were not the case, they wouldn't bother with it at the server level, where the workload is often hundreds of separate services running. Only a handful of entry-level Xeons (the cheapest in each class) have hyper-threading disabled.

If you want to see how well hyper-threading scales these days when the workload is parallel enough, look at the i3-4130 in gaming benchmarks (since almost all games leverage up to 3-4+ threads these days). Effectively an i7 that has been cut in half, it is able to keep up with the i5 better than its "2-core" class might suggest. It is hyper-threading that closes the gap. In this particular comparison, the i3 is placed in workloads that are forced to scale into hyper-threading wherever they can, because there is nowhere else to go. We often see minimal scaling from the i5 to the i7 in these same games because the i5 has already offered all the parallelism that the game engine can leverage. Appreciating the i7 in this case demands that we add additional workload (like a second monitor with other apps running).

The only time hyper-threading increases thermal dissipation is when it is being utilized. When it is being utilized, it will always return more performance scaling than any additional overclock that would be afforded by disabling it. As I already pointed out, hyper-threading, under full saturation, can improve throughput up to 30% for a respectable 15% increase in power consumption. The same increase in power consumption applied toward an overclock would buy approximately a 5% clock speed improvement; increasing clock speeds by 30% would require something like a 60% increase in power consumption. If the workload in question doesn't have a way to leverage hyper-threading, then having it on or off should have very little effect on thermals at max overclocks, because it is going unused anyway. Turning it off would just be a way to "optimize" an overclock for a specific workload (getting that extra 5% overclock for the desired workload while preventing the chip from overheating under an unexpected parallel workload, or a workload that the user does not care to optimize for anyway).
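Using the figures quoted above (~+30% throughput for ~+15% power under full HT saturation, vs. ~+5% clock for the same +15% power from an overclock), the efficiency comparison works out like this:

```python
# Perf-per-watt of hyper-threading vs. an equal-power overclock,
# using the approximate figures from the post above (baseline = 1.0).
ht_perf, ht_power = 1.30, 1.15   # HT fully saturated
oc_perf, oc_power = 1.05, 1.15   # +5% overclock at the same power cost

print(f"hyper-threading: {ht_perf / ht_power:.2f}x baseline perf/watt")
print(f"overclocking:    {oc_perf / oc_power:.2f}x baseline perf/watt")
```

Hyper-threading comes out above baseline efficiency while the overclock falls below it, which is why disabling HT only "pays" for workloads that can't use it.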

Quote:

$100, even if it's less than 10% of the total system cost, still isn't nothing. $100 is the difference between an H81 and a good Z87. It's also pretty much a free PSU (with some money left over). I also think it's wrong to say that the i7 performs 30% better than an i5.

I believe you may have inadvertently taken me out of context. I said "up-to 30%."

Interestingly enough, I was trying to be as conservative as I could with the 30% number to prevent this. There are some workloads where the scaling is even better due to a combination of hyper-threading AND the larger, faster cache. Either way, I probably should have placed greater emphasis on the "up-to" part. (I'll try to remember to put such a point in bold in the future.)

The ~$220-260 E3-1230V2/V3 series chips on a B85 board can offer competitive performance with an i5-"K" on a Z board for less money if the workloads are parallel enough, especially for users who are on the fence about overclocking or running multiple GPUs (this thread may apply). The E3 can offer that "i7"-class performance to people who don't need an iGPU. An E3+B85 may afford the opportunity to buy into more GPU, or an SSD.

Quote:

The situations where that's true are pretty limited, especially for a gaming rig. Call me stingy, but knocking 10% off the total system cost seems like a pretty damn nice deal to me. Even 5% is good. There are only a handful of tasks where the i7 beats the i5 hands down, and if I'm not going to be doing those tasks on a regular basis, I'll just keep my money and accept slightly lower performance whenever I do.

The neat thing about it is that if you go through a process of rationalizing the i5, and then actually buy the i5, you will have achieved a harmony with your rationalization that has value in and of itself. If someone else can rationalize the i7, or the E3, or the FX chip, and purchases it, then they will have achieved the same fulfillment in their purchase decision that you have in yours.

I'm also a cheap-A$$ (ask my wife). Which is why I have a $110 CPU, a $50 MOBO, and a $50 HSF (less combined cost than an i5). I was able to rationalize that this would be the best value for me, and I love it. The novelty of overclocking it to the same performance as an E3-1230V2 in parallel workloads (a $226 chip) has been extraordinarily fun and rewarding.

Quote:

Well, at this point I'm just playing the devil's advocate really. Thank you for the big post, no matter what you think is best, there are some interesting ideas in there I hadn't thought of before.

Thanks for the nice post, mdocod.
If I understand your post correctly, it's worth investing in the 4770K or 4930K for the hyper-threading, given the way I am going to use the system.
Heat should not be a problem; I am looking into investing in a custom water-cooling loop or a few (I will need to find out, once all the hardware is confirmed, exactly what I will need to loop the CPU and eventually 2 GPUs, so no point worrying about that yet).
Just looking at it, it will have to be the 4930K, the slowest of them, but I should be able to afford it. I might have to borrow some money, but as I said, I don't mind that part much for a good investment.
The question now becomes what motherboard and memory to go with it? On the motherboard, I will agree that it is one of the 2 components I do not want to cut any corners with (the other one is the PSU). But the motherboard is probably the hardest component to understand (in my opinion at least). As for memory, at the moment my plan is a Corsair 8GB 1866MHz Vengeance low-profile kit (twice, obviously), but I'm open to anything more optimal.

$100 is a lot of hotdogs.
Those hotdogs are a lot more important than the hyperthreading.

That's how I feel about beer; that's why after building this PC I will go and live with my friend for a week. Mooch off food, beer, and utility bills? Why not.
If worse comes to worst in the budget range, I always have the option to sell my motorbike (and associated gear) for quick cash. It was a silly thing to buy, but it was necessary at the time, and I don't use it anymore.

Quote:

Originally Posted by mdocod

Hyper-threading is often misunderstood. In simplest terms, it can be summed up as follows: the ability to execute an integer instruction while simultaneously performing a floating-point calculation.

Hey! I remember learning about that in my CPU Architecture module in my second year. Being a dropout and never hearing about it again didn't improve my memory of it =3
The question about the mobo for the 4930K still stands though.

I like the Asus X79 Deluxe and Crucial Ballistix Tactical 1.35V (the yellow sort of matches the board, and it's great quality RAM; low voltage and tight timings mean all sorts of overclocking headroom).

The 2011 socket platform is largely underutilized in gaming workloads. In the same way that the move from the i5 to the i7 on the 1150 socket has questionable scaling (though I think you might appreciate it), scaling up to a hex-core on socket 2011 is even less return on investment. At that range, a gaming build is mostly about the novelty and fun of owning the behemoth flagship workstation/server platform. Having 14 SATA ports and over 50GB/s of memory bandwidth becomes mostly just about having excess for the sake of having excess (which can be fun, especially if you like to tinker, performance tune, or compete in performance tuning/benching, etc.).

Ask anyone who bought into X58 back in its prime, and most will tell you that it was:

A: A lot of money, that could have been spread out into incremental upgrades on the regular consumer platform over the years instead.
B: A really fun platform to own and tinker with.
C: Still so powerful that there's no real compelling reason to upgrade, even 5 years later.

Edited by mdocod - 3/6/14 at 11:08am

Options B and C sound fun to me.
And there's nothing wrong with excess for the sake of excess; that's one of the principles that makes capitalism work, after all.
Besides, if I have extra capabilities, it's not like there is nothing to do with the extra processing power.
Streaming/Skyping an HD movie and playing a game at the same time. If the setup is indeed that excessive, I don't have to shut down the antivirus or all the little visual OS UIs just for a little extra processing power.
Also, color coordination doesn't matter much to me. For water cooling I'm planning to get as many different colors of tubing as I can, with the same liberal attitude applied to any other lighting I will end up placing inside it =3