Posted by timothy on Saturday June 07, 2014 @05:00AM from the free-lunch dept.

Vigile (99919) writes "Since the introduction of Intel's Ivy Bridge processors, a subset of users has complained about the company's change of thermal interface material between the die and the heat spreader. With the release of the Core i7-4790K, Intel is moving to a polymer thermal interface material that it claims improves cooling on the Haswell architecture, along with the help of some added capacitors on the back of the CPU. Code-named Devil's Canyon, this processor boosts stock clocks by 500 MHz over the i7-4770K for the same price ($339) and lowers load temperatures as well. Unfortunately, in this first review at PC Perspective, overclocking doesn't appear to be improved much."

Not everyone is limited by CPU speed. In fact, the vast majority of users are limited by network latency and bandwidth; overclocking will not help them much. Even people running heavy-duty local executables farm graphics out to the GPU. So non-GPU, clock-speed-limited local apps make up only a small subset of users and applications.

For most people, video re-encoding is probably the most CPU-intensive task. Even that could be farmed out to the GPU, if it isn't already.

There are some power users in the accounting and finance departments who commit crimes against software with atrociously written Excel macros. Their spreadsheet update time scales as the square or cube of the number of cells. They blame the computer for being slow and demand faster computers. Even this group does not benefit from overclocking, because Excel is so bloated that it triggers many page faults and long fetches that miss the L1, L2, and L3 caches.
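The quadratic blow-up is easy to see in a toy model (a hypothetical sketch, not how Excel actually recalculates): if every single-cell edit triggers a full recalculation of every cell entered so far, filling n cells costs on the order of n^2 evaluations.

```python
# Toy model of naive spreadsheet recalculation (illustrative only,
# not Excel's real recalc engine).

def naive_fill(n):
    """Fill n cells; each new cell triggers a full recalc of all
    cells entered so far. Returns total cell evaluations."""
    evaluations = 0
    for filled in range(1, n + 1):
        evaluations += filled  # full recalc touches every cell so far
    return evaluations  # = n*(n+1)/2, i.e. O(n^2)

print(naive_fill(10))    # 55
print(naive_fill(100))   # 5050 -- 10x the cells, ~100x the work
```

Doubling the clock only halves that constant factor; the quadratic growth (and the cache misses) remain.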

So who might benefit? Maybe people like me, doing finite element analysis, mesh generation, or other such physics simulations.

For the vast majority of users, reducing the temperature and spending that headroom on more reliable, longer-lasting, less power-hungry chips would give the most bang for the buck. But that is difficult to test, does not garner press reports, and, more importantly, cuts into future sales. So they will keep obsessing over overclocking gimmicks.

Rendering the scene is the most computationally intensive task, and it has already been farmed out to the GPU. The rest of the gaming software does not benefit as much from the CPU. Many game algorithms are embarrassingly parallel and scale up nicely on multi-core chips, so most threads in a gaming executable idle much of the time. They don't benefit much from overclocking.
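A quick Amdahl's-law sketch makes the cores-vs-clocks point (the 90% parallel fraction is an assumed, purely illustrative number):

```python
# Amdahl's law: speedup from n cores when a fraction p of the work
# is parallelizable. Compare with the flat gain from an overclock.

def speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

p = 0.9                      # assumed parallel fraction
print(speedup(p, 4))         # ~3.08x from 4 cores
print(speedup(p, 8))         # ~4.71x from 8 cores
# A 10% overclock gives at most a uniform 1.10x, far less than
# the ~1.53x gained by going from 4 to 8 cores on this workload.
```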

Secondly, gamers form a very small segment of computer users, and the mobile phone gaming market is bringing in many non-traditional gamers.

As a gamer, I can say there are a lot of games that are CPU-bound. ArmA 3 and Skyrim are great examples: basically any game with a complicated physics engine, or one that depends on a lot of AI (game AI, meaning NPCs) calculations. A faster CPU can offer much better performance than a slower one with the same video hardware.

Those things go mostly hand in hand: you can either increase performance or reduce power usage. Things get a little more complicated as you approach SoC power levels, but in general the vendor with the highest-performing chips can also scale them down into the lowest-consuming chips. There's a reason Intel can sell $500-1000 mobile chips: in that power envelope AMD doesn't have a match on performance, so Intel is free to set the price at will.

Yes, most simulation companies support high-performance clusters to varying degrees: in-house, bought outright, or rented from Amazon or Microsoft. Basically, the simulation starts with a relatively small geometry file, and the user sets up some small data sets of material properties, etc. CPU usage, disk usage, and data transfers balloon during meshing and go through the roof during solution. Once the matrix is inverted and the solution is extracted, post-processed, and compressed, the final output is only a fraction of the peak data.

Maybe people like me, doing finite element analysis, mesh generation, or other such physics simulations.

Probably not even you. Such tasks demand a huge amount of memory, and the bottleneck is often the number of memory channels per core on the CPU. If you scale the benchmark result with the number of cores, a CPU with 4 cores and 4 memory channels will outperform a CPU with, say, 6 cores and 3 channels, even if the 6-core CPU is clocked higher. Given that the software scales nicely, it is better to add more CPUs to the cluster than to increase the clock speed. Also, if CUDA takes off, the clock speed of the CPU will be
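The channels-beat-cores argument can be sketched with a back-of-the-envelope roofline model. All the numbers below are hypothetical (per-channel bandwidth, flops per cycle, and the ~8 bytes of memory traffic per flop assumed for a sparse solver kernel), chosen only to illustrate the shape of the tradeoff:

```python
# Roofline sketch: attainable throughput is the lesser of compute
# peak and memory peak. Bandwidth-bound kernels hit the memory cap.

def attainable_gflops(cores, ghz, flops_per_cycle,
                      channels, gb_s_per_channel, bytes_per_flop):
    compute_peak = cores * ghz * flops_per_cycle            # GFLOP/s
    memory_peak = channels * gb_s_per_channel / bytes_per_flop
    return min(compute_peak, memory_peak)

# Hypothetical parts: 4 cores / 4 channels vs 6 faster cores / 3
# channels, kernel needing ~8 bytes of memory traffic per flop.
a = attainable_gflops(4, 3.5, 8, 4, 21.3, 8)
b = attainable_gflops(6, 4.0, 8, 3, 21.3, 8)
print(a, b)  # the 4-channel part wins despite fewer, slower cores
```

Raising the clock moves the compute peak, which neither part is touching; only more channels (or less traffic per flop) moves the binding constraint.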

Let us assume for a moment that you really do need a faster machine. How many people are like you? We (my .sln file takes about 2 hours for a full clean build on a dual-Xeon machine with 16 cores / 32 hyperthreads, the git repo on a 256 GB SSD, and 128 GB of memory; the last time I did a line count it was well over 3 million LOC spread over some 30 libraries and 30,000 .cpp and .h files) are a minority. All those gamers and finance executives buying very powerful machines are subsidizing our power machines. My machine costs just over

But the users of i7s are not the friendly receptionist at work or Grandma checking her FB. These are almost all gamers mixed with software developers and a few niches.

So yes, the CPU is important. If not, then the rest of the normal users can buy i3s, or maybe an i5 if they have more cash and heavy workflows. Unless you make 1 TB video edits or play Wolfenstein, the average user won't see any benefit between a 4-core i5 and an i7.

So who might benefit? Maybe people like me, doing finite element analysis, mesh generation, or other such physics simulations.

Actually, finite element analysis is one of the fields that stands to benefit most from the use of GPUs. Although it does depend on just what it is you are doing.

I remember back in the early '90s when our office got a brand-new 25 MHz 486. We had great expectations for it. On the Friday it came in, I set up a groundwater flow model (I forget how many cells it was). The simulation ran the rest of Friday... we went home. Monday when we came back it was still running...

Agreed. It's completely irrelevant to most use cases. But not all. For instance, pro audio, which is a part of what I do, still benefits greatly from increased CPU speed as well as reduced cache latency. The tools I use have not been architected to take advantage of the immense power of modern GPUs. Eventually they likely will be, but, for now, every couple years' worth of CPU improvements does make a significant difference for what I do.

One sample doesn't show anything; they even mention that in TFA. Also, it appears Intel reported achieving 5.5 GHz on air cooling during Computex for their OC challenge. I wouldn't be the least bit surprised if they cherry-picked the cores for the challenge, but it shows the chips are at least capable of higher.

How much one can overclock has always been a roll of the dice, down to luck. That said, it is well known in the water-cooling community that the CPU thermal interface on Sandy Bridge, Ivy Bridge, and original Haswell was a limiting factor for cooling efficiency on water-cooled rigs, and there were no spectacular results even on super-lucky draws. I am very excited to see how this might change with the new thermal interface material.

Apart from games and video encoding, we've kind of reached a plateau of user requirements, such that a platform like the AMD Kabini would be plenty for the average user. Since the average Joe can live with only a tablet, I'm guessing even Kabini is overkill by comparison.

Agreed. I've said it before and I'll say it again: significant performance increases in the x86 world are a thing of the past.

There simply isn't enough money in the market chasing higher performance to make the development cost of faster chips worth the investment.

This is actually an opportunity for AMD. I expect it costs AMD less to catch up to Intel than it costs Intel to push to faster speeds, and since Intel isn't being paid anymore to get faster, AMD can, like the slow and steady tortoise, gradually catch up to Intel. I believe it will take a couple more years, but if AMD survives that long, I believe that it will have achieved near performance parity with Intel by then.

And then neither company's offerings will get much faster, forever thereafter, until there is some new kind of 'killer app' that demands increased CPU speeds that people are willing to pay for (could happen anytime; but the way things are going, with everyone moving to mobile phones and pads, I think we're in for a relatively long haul of form factor and power usage dominating the marketable characteristics of CPUs).

I believe Intel will continue to hold a power advantage over AMD for a long time though, but AMD will gradually narrow that gap as well.

The thing is, AMD will be fighting Intel for a stagnating/shrinking CPU market, and more than likely AMD won't increase its margins significantly during this process, it will just reduce Intel's margins. Not really good news for either company, but worse for Intel.

Yes, good times... anything seemed possible just by waiting a few months until the hardware could take it. On the other hand, now and going forward, good software developers will be valued highly, because efficient CPU resource usage and SMP are difficult but increasingly important.

Yeah, but there comes a point where it is technologically easier to increase the amount done with each clock tick than it is to make logic that can switch faster. We reached that point about 10 years ago....
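The "more per tick" point reduces to performance ~ IPC x clock, so a wider core at a lower clock can beat a narrower core clocked higher. The numbers below are purely illustrative, not measurements of any real part:

```python
# Performance as instructions-per-cycle times clock frequency.
# Numbers are illustrative, not taken from any real CPU.

def perf(ipc, ghz):
    """Relative throughput: instructions retired per nanosecond."""
    return ipc * ghz

old = perf(1.0, 3.8)   # narrow core chasing clock speed
new = perf(1.6, 3.0)   # wider core doing more per tick
print(new > old)       # True: 4.8 vs 3.8 despite the lower clock
```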

Yeah, I miss those days in the late 1980s... it wasn't just the clock speeds that were revving upwards: hard disk drives were doubling every six months (10, 20, 40, 80, 160 megabytes... we're up to 2 terabytes now), screen resolutions kept growing (320x200, 640x480, 1024x768, 1280x1024, 1600x1200), and so did color depths (8-bit, 16-bit, 24-bit). Now we're into quad monitors with HD resolutions, so maybe that one is still going upwards.

I'm still running first-generation LGA 1366 and I haven't even begun to find a reason to upgrade anything but the video card, and that's just to stay on top of gaming. It would be nice to get USB 3, I suppose, but truthfully that's just a PCIe add-in card away.