Posted
by
Soulskill
on Saturday January 04, 2014 @06:07PM
from the go-big-or-go-home dept.

New submitter asliarun writes "David Kanter of Realworldtech recently posted his take on Intel's upcoming Knights Landing chip. The technical specs are massive, showing Intel's new-found focus on throughput processing (and possibly graphics). 72 Silvermont cores with beefy FP and vector units, mesh fabric with tile based architecture, DDR4 support with a 384-bit memory controller, QPI connectivity instead of PCIe, and 16GB on-package eDRAM (yes, 16GB). All this should ensure throughput of 3 teraflop/s double precision. Many of the architectural elements would also be the same as Intel's future CPU chips — so this is also a peek into Intel's vision of the future. Will Intel use this as a platform to compete with nVidia and AMD/ATI on graphics? Or will this be another Larrabee? Or just an exotic HPC product like Knights Corner?"

They tested this for the next iPad. While Apple felt the 5-second battery life was too short to be practical, the beta testers were more concerned about the Apple-shaped third-degree burns imprinted on their thighs and palms.

eDRAM isn't very well defined, but it basically boils down to "DRAM manufactured on a modified logic process," allowing it to be placed on-die alongside logic, or at the very least built using the same tools if you're a logic house (Intel, TSMC, etc). This is as opposed to traditional DRAM, which is made on dedicated processes that are optimized for space (capacitors) and follow their own development cadence.

The article notes that this is on-package as opposed to on-die memory, which under most circumstances would mean regular DRAM would work just fine. The biggest example of on-package RAM would be SoCs, where the DRAM is regularly placed in the same package for size/convenience and then wire-bonded to the processor die (although alternative connections do exist). Conversely eDRAM is almost exclusively used on-die with logic - this being its designed use - chiefly as a higher density/lower performance alternative to SRAM. You can do off-die eDRAM, which is what Intel does for Crystalwell, but that's almost entirely down to Intel using spare fab capacity and keeping production in house (they don't make DRAM) as opposed to technical requirements. Which is why you don't see off-die eDRAM regularly used.

Or to put it bluntly, just because DRAM is on-package doesn't mean it's eDRAM. There are further qualifications to making it eDRAM than moving the DRAM die closer to the CPU.

But ultimately as you note cost would be an issue. Even taking into account process advances between now and the Knights Landing launch, 16GB of eDRAM would be huge. Mind bogglingly huge. Many thousands of square millimeters huge. Based on space constraints alone it can't be eDRAM; it has to be DRAM to make that aspect work, and even then 16GB of DRAM wouldn't be small.

It may not be eDRAM, but I'm not sure what else Intel would easily package with the chip. We know the 128 MB of eDRAM on 22 nm is ~80 mm^2 of silicon, and currently Intel is selling ~100 mm^2 of N-1 node silicon for ~$10 or less (see all the ultra-cheap 32 nm Clover Trail+ tablets where they're winning sockets against Allwinner, Rockchip, etc., indicating that they must be selling them for equivalent or better prices than these companies). By the time this product comes out, 22 nm will be the N-1 node. In additi
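The area claim upthread ("many thousands of square millimeters") checks out with simple arithmetic, assuming density scales linearly from the ~80 mm^2 per 128 MB figure quoted above and ignoring any gain from a newer process:

```python
# Back-of-the-envelope: die area for 16 GB of eDRAM at 22 nm density.
# Assumes density scales linearly from the quoted 128 MB ~= 80 mm^2.
MB_PER_GB = 1024
capacity_mb = 16 * MB_PER_GB         # 16 GB target, expressed in MB
area_per_128mb = 80                  # mm^2, Intel's 22 nm Crystalwell figure
blocks = capacity_mb // 128          # how many 128 MB blocks are needed
total_area = blocks * area_per_128mb # mm^2
print(blocks, total_area)            # 128 blocks, 10240 mm^2
```

Even halving that for a process shrink leaves you far beyond any practical die or package budget, which is why it has to be plain DRAM.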

An Nvidia Quadro card costs $8,000 for an 8GB card. I would consider $8,000 "many thousands of dollars". Nobody is suggesting Knights ____ is competing with any consumer chips, CPU or GPU. I have a $1,500 raytracing card in my system along with a $1,000 GPU as well as a $1,000 CPU. If this could replace the CPU and GPU but compete with a dual-CPU system for rendering performance, I would be a happy camper even if it cost $3-4k.

I wonder how nice these will be to program. The "just recompile and run" promise for Knights Corner was little more than a cruel joke: to get any serious performance out of the current generation of MICs you have to wrestle with vector intrinsics and that stupid in-order architecture. At least the latter will apparently be dropped in Knights Landing.

For what it's worth: I'll be looking forward to NVIDIA's Maxwell. At least CUDA got the vectorization problem sorted out. And no: not even the Intel compiler handles vectorization well.

Actually, the in-order execution isn't so much of a problem in my experience. The vectorization is the real problem. On GPUs you essentially have the same problem, except it is hidden in the programming model; the performance problems are there as well.

Anybody who understands GPU architecture well enough to write efficient code there won't have much problem using the MIC architecture. The programming model is different, but the key difficulties are essentially the same. If you think about a MIC SIMD element as a CUDA th

It's not entirely syntactical. Local shared memory is exposed to the CUDA programmer (e.g., __syncthreads()). CUDA programmers also have to be mindful of register pressure and the L1 cache. These issues directly affect the algorithms used by CUDA programmers. CUDA programmers have control over very fast local memory---I believe that this level of control is missing from MIC's available programming models. Being closer to the metal usually means a harder time programming, but higher performance potential.

I don't understand. MIC is your regular cache-based architecture. Accessing L1 cache on MIC is very fast (3-cycle latency, if my memory is correct). You have similar register constraints on MIC, with 32 512-bit vectors per thread (or maybe per core). Both architectures overlap memory latency by using hardware threading.

I programmed both MIC and GPU, mainly on sparse algebra and graph kernels. Quite frankly, there are differences, but I find them much more alike than most people acknowledge. The main difference in my op

I wonder how nice these will be to program. The "just recompile and run" promise for Knights Corner was little more than a cruel joke

I tried recompiling and running some OpenCL code (that previously was running on GPUs). It was "just recompile and run" and the promises about performance were kept. But still, OpenCL is not what most people consider "nice to program".

Yeah, OpenCL is a different thing. But if you talk to laymen, they will often repeat the marketing speech that you can take your OpenMP(!) code written for traditional multi-cores, recompile, and enjoy... Not true, in my experience.

Intel's AVX-512 is really friggin cool, and a huge departure from their SIMD of the past. It adds some important features -- most notably mask registers to optimally support complex branching -- which make it nearly identical to GPU coding so that compilers will have a dramatically easier time targeting it. I doubt it will kill discrete GPUs any time soon, but it's a big step in that long-term direction.
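A rough sketch of what mask registers buy you, in plain Python rather than intrinsics: a comparison produces a per-lane mask, and subsequent vector operations only write lanes where the mask is set, so both sides of a branch execute without any scalar control flow. This mirrors AVX-512's k-registers (and GPU predication) in spirit only; real code would use 512-bit registers and intrinsics:

```python
# Conceptual model of AVX-512 masked execution (per-lane predication).
# Hypothetical 8-lane "vector"; computes abs() without a scalar branch.
a = [1.0, -2.0, 3.0, -4.0, 5.0, -6.0, 7.0, -8.0]

# Compare step: generates a mask, one bit per lane (like vcmpps -> k1)
mask = [x < 0 for x in a]

# Masked op: negate only the lanes where the mask is set, leaving the
# other lanes untouched (like a {k1}-predicated vector instruction)
result = [-x if m else x for x, m in zip(a, mask)]
print(result)  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```

This is exactly the shape of code a compiler can auto-vectorize once predication is first-class, which is why branching loops that defeated SSE/AVX become tractable targets.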

The recently revealed Mill architecture [ootbcomp.com] is far more interesting, and also offers a much more attractive programming model. It is a highly orthogonal architecture naturally capable of wide MIMD and SIMD. Vectorization and software pipelining of loops is discussed in the "metadata" talk, and is very clever and elegant. Those who have personally experienced the tedium of typical vector extensions will appreciate it all the more.

Based on simulations, the creators expect an order of magnitude improvement in performan

In my opinion, reusing units from desktop/server CPUs is the whole point of these x86-based experiments. The counterpart is dealing with the x86 mess everywhere. This seems like a desperate reaction to AMD's CPU+GPGPU, which also has drawbacks. I bet that both Intel and AMD prefer to keep the memory controller as simple as possible, giving themselves a comfortable long run without burning their ships too early. E.g. a CPU+GPGPU on the same die, with 8 x 128-bit separate memory controllers configured as NUMA (

20 years? I would be very doubtful regarding any prediction beyond the point where current process scaling trends finally break. Note, they might break the other way: switching to a non-silicon material might allow higher frequencies, which would again shift the tradeoff between locality, energy, and production cost. But there is no reason, no reason at all, to expect the current style to last for more than ten years, while you could be quite right that it could stay much the same for the next five years or so.

This is another one of those things made from the most rare element in the universe: unobtainium. You can't get it here. You can't get it there either. At one point I would have argued otherwise, but no. CUDA cores I can get. This crap I can't get. It's just like the Cell Broadband Engine. Remember that? If you bought a PS3, then it had a (slightly crippled) one of those in it. Except that it had no branch prediction. And one of the main cores was disabled. And you couldn't do anything with the integrated graphics. And if you wanted to actually use the co-processor functions, you had to rewrite your applications. And you needed to let IBM drill into your teeth and then do a rectal probe before you could get any of the software to make it work. And it only had 256MB of RAM. And you couldn't upgrade or expand that. With Intel's new wonder, we get the promise of 72 cores. If you have a dual-Xeon system. And give Intel a million dollars. And you sign a bunch of papers letting them hook up the high-voltage rectal probes. Or you could buy a Kepler NVIDIA card which you can install into the system you already own, and it costs about the same as a half-decent monitor. And NVIDIA's software is publicly downloadable. So is this useful to me or 99.999% of the people on /.? No. It's news for nerds, but only four guys can afford it: Bill G., Mark Z., Larry P. and Sergey B.

OK, we have yet another mesh of processors, an idea that comes back again and again. The details of how processors communicate really matter. Is this a totally non-shared-memory machine? Is there some shared memory, but it's slow? If there's shared memory, what are the cache consistency rules?

Historically, meshes of processors without shared memory have been painful to program. There's a long line of machines, from the nCube to the Cell, where the hardware worked but the thing was too much of a pain to program. Most designs have suffered from having too little local memory per CPU. If there's enough memory per CPU to, well, run at least a minimal OS and some jobs, then the mesh can be treated as a cluster of intercommunicating peers. That's something for which useful software exists. If all the CPUs have to be treated as slaves of a control machine, then you need all-new software architectures to handle them. This usually results in one-off software that never becomes mature.

Basic truth: we only have three successful multiprocessor architectures that are general purpose - shared-memory multiprocessors, clusters, and GPUs. Everything other than that has been almost useless except for very specialized problems fitted to the hardware. Yet this problem needs to be cracked - single CPUs are not getting much faster.

Historically, meshes of processors without shared memory have been painful to program

Which is why we don't see those GPU cards in absolutely every place where there is a massively parallel problem to solve. Even 8GB is not enough for some stuff and you spend so much time trying to keep the things fed that the problem could already be solved on the parent machine.

The mesh replaces the ring bus used in the current-generation MIC as well as mainstream Intel x86 CPUs. Each node in the mesh is 2 CPU cores and L2 cache. The mesh is used for connecting to the DRAM controllers, external interfaces, L3 cache, and of course, for cache coherency. The memory consistency model is the standard x86 one. So from a programmability point of view, it's a multi-core x86 processor, albeit with slow serial performance and beefy vector units.

Second, while Knights Landing can act as a bootable CPU, many applications will demand greater single threaded performance due to Amdahl’s Law. For these workloads, the optimal configuration is a Knights Landing (which provides high throughput) coupled to a mainstream Xeon server (which provides single threaded performance). In this scenario, latency is critical for communicating results between the Xeon and Knights Landing.

So there will be a useful mainstream CPU closely coupled with a bunch of vector oriented processors that will be hard to use effectively. (Also from TFA).

The rumors also state that the KNL core will replace each of the floating point pipelines in Silvermont with a full blown 512-bit AVX3 vector unit, doubling the FLOPs/clock to 32.

So unless there is a very high compute-to-memory-access ratio, this monster will spend most of its time waiting for memory and converting electrical energy to heat. Plus, writing software that uses 72 cores is such a walk in the park...

Some stuff actually is. It depends on how trivially parallel the problem is. With some stuff there is no interaction at all between the threads - feed it the right subset of the input - process the data - dump it out.

Some stuff actually is. It depends on how trivially parallel the problem is. With some stuff there is no interaction at all between the threads - feed it the right subset of the input - process the data - dump it out.

More importantly, for some applications a limited amount of very low-latency/high-bandwidth communication is enough to give spectacular performance improvements. In those cases, the fully coherent x86 model, kept up by this kind of cache and memory architecture, will do wonders, compared to an MPI implementation with weaker individual nodes, but also possibly against (current) nVidia offerings. It's harder to say how it will stack up against Maxwell.

Where are you getting Atom cores from? I read up on QPI, which this design will be using, and that is used only w/ Xeons and i7s. So this chip looks pretty much like a successor to the Xeons and i7s, and will probably be seen either in servers or in Mac Pros, but not likely in your average laptop, much less a tablet.

From this Extremetech article [extremetech.com], which has a slide speaking of the Knights Landing processor architecture having "up to 72 Intel Architecture cores based on Silvermont (Intel(R) Atom processor)"?

You're both correct. The original Atom CPU was built separately and started before the i7 arch. The new Silvermont "Atom" borrows a lot from the i7 arch. It is a huge upgrade to the Atom line; it's like the original i7 fine-tuned for power and running on 22 nm. Very strong OoO pipeline design. The low power usage is great for a many-core design, because efficiency is more important than single-threaded performance.

Couple things: 1) The 22 nm Silvermont Atom cores are a complete redesign of the badly aging original Atom cores. They are much more powerful, and much more powerful per watt. 2) These chips aren't going to replace CPUs; they are most likely going to compete with Nvidia Tesla, a PCIe card that highly parallel workloads can be offloaded to. One CUDA core isn't very powerful, but stick 2,688 active ones on a chip and for certain tasks you have a lot of power. The K20X Tesla is capable of 1.3 trillion double-precision

It depends on the use case. There are many applications where this would shine. Sure if you want to play Quake 3 Arena it's not going to give you much at all, but if you're doing parallel processing for scientific or engineering applications this would rock.

This isn't intended for you if you can't think of what to do with all those cores.

This is for the high performance physics folks to whom the difference between 16 cores, 256 cores, and maybe even 8192 cores is a line in a config file.

It's also for the folks developing 24-megapixel RAW files (which Nikon's cheapest SLR spits out these days), where splitting the image into 64 sectors is no more difficult than splitting it into four, or for the folks doing video encoding, which is pretty trivially parallelizable.

I think you'd be surprised how many real-world day-to-day tasks can be and are parallelized: almost everything concerning audio and video (images or movies), searching, analyzing, rendering web pages, compiling, computing physics and AI for games.

I can't think of one compute-intensive day-to-day action that is not parallelized or wouldn't be easy to do so.
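Most of the tasks listed above share the same shape: split the input into independent chunks, process each one without inter-chunk communication, then merge. A minimal sketch with Python's stdlib (the work function here is a stand-in for real per-chunk work like decoding an image sector or filtering an audio block):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for real per-chunk work (decode a sector, filter audio, ...)
    return sum(x * x for x in chunk)

data = list(range(1000))
# Split input into independent chunks -- no inter-chunk communication needed
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Threads shown for brevity; CPU-bound Python work would actually need
# ProcessPoolExecutor to sidestep the GIL and use multiple cores
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
print(total)  # same answer as the serial sum of squares
```

The point is that the merge step (one `sum`) is trivial compared to the chunk work, which is exactly the profile that scales across 72 cores.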

I think you'd be surprised how many real-world day-to-day tasks can be and are parallelized: [...] searching

I thought searching a large collection of documents was disk-bound, and traversing an index was an inherently serial process. Or what parallel data structure for searching did I miss?

rendering web pages

I don't see how rendering a web page can be fully parallelized. Decoding images, yes. Compositing, yes. Parsing and reflow, no. The size of one box affects every box below it, especially when float: is involved. And JavaScript is still single-threaded unless a script is 1. served from a web server (Chrome doesn't support Web Workers in file:// for security reasons), 2. running in a browser other than IE on XP, IE on Vista, and Android Browser <= 4.3 (which don't support Web Workers at all), and 3. not accessing the DOM.

compiling

True, each translation unit can be combined in parallel if you choose not to enable whole-program optimization. But I don't see how whole-program optimization can be done in parallel.

In my experience, most cases where compilation takes a long time involve multiple compilation units. I have a fair bit of experience with compiling linux distros professionally...when you're building glibc and the kernel and five hundred other packages it'll use as many cores as you can throw at it.

Parsing and reflow can be efficiently parallelized if enough parents have their heights determined by something other than their contents - say, if the main parts of the document have explicitly defined heights. Then they can be processed in parallel efficiently. Even without that, couldn't the children each be processed in parallel for a good portion of the work, with updates afterwards for properties that have dependencies outside of themselves? Yes, floats can cause some issues,
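When heights are known up front, the only cross-box dependency is a running vertical offset, i.e. a prefix sum, and prefix sums have well-known parallel formulations. A toy sketch of the classic two-pass blocked scan (the per-chunk passes marked "independent" are the parts that could run on separate cores; box heights here are made up):

```python
from itertools import accumulate

def y_offsets(heights):
    """Top y-coordinate of each box = prefix sum of the heights above it."""
    return [0] + list(accumulate(heights))[:-1]

def y_offsets_blocked(heights, block=4):
    """Blocked scan: chunk-local sums first, then combine chunk offsets."""
    chunks = [heights[i:i + block] for i in range(0, len(heights), block)]
    chunk_totals = [sum(c) for c in chunks]             # pass 1: independent
    chunk_starts = [0] + list(accumulate(chunk_totals))[:-1]
    out = []
    for start, c in zip(chunk_starts, chunks):          # pass 2: independent
        out.extend(start + y for y in y_offsets(c))
    return out

h = [10, 25, 5, 40, 15, 30, 20, 10, 5]  # hypothetical box heights in px
assert y_offsets_blocked(h) == y_offsets(h)
print(y_offsets(h))
```

This only works because the heights don't depend on layout; the moment a box's height depends on its contents (or a float), the dependency chain comes back, which is the grandparent's point.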

If your data is not indexed you are likely to be faster with multiple threads (if there is no other bottle neck like, for example, disk throughput).

Or RAM throughput.

Parsing: Why not?

Sure, the browser can parse multiple CSS files or multiple HTML files or multiple JavaScript files at once, just as the browser can decode multiple images at once. But the parser for a single file is a state machine. In order to "drop the needle" halfway into the byte stream and start parsing the second half on the second core, the parser would first have to know what state the state machine was in as of halfway into the stream. What parallelization were you thinking of?
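A tiny demonstration of why you can't "drop the needle": whether a character is inside a quoted region depends on every byte before it, so a second worker starting at the midpoint has no choice but to guess the state machine's state, and guesses wrong (toy one-state parser, not a real HTML/CSS grammar):

```python
def count_unquoted_commas(text, in_quote=False):
    """Toy state-machine parser: counts commas outside double quotes."""
    count = 0
    for ch in text:
        if ch == '"':
            in_quote = not in_quote   # parser state flips on every quote
        elif ch == ',' and not in_quote:
            count += 1
    return count

doc = '"x,y,z",w,v'
serial = count_unquoted_commas(doc)   # correct answer: 2

# "Drop the needle" at the midpoint: the second worker cannot know it is
# starting inside a quoted region, so it assumes the default state.
mid = len(doc) // 2
naive = count_unquoted_commas(doc[:mid]) + count_unquoted_commas(doc[mid:])
print(serial, naive)  # 2 0 -- the split parse disagrees with the serial one
```

Speculative approaches (guess the state, reparse if wrong) exist, but they pay for the guess somewhere, which is why per-file parallelism is the easy win and per-file parsing stays serial.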

As I wrote elsewhere [slashdot.org]: laying out a web page that includes float-styled elements. That fits 1) and 2), and it fits 3) on a netbook or tablet with an ARM or Atom processor. Or repaginating a document in a word processor, which happens every time the user enters enough text to make the current paragraph one line longer, deletes enough to make it one line shorter, or changes the styling of any span of text. Repagination may affect figures, references to page numbers elsewhere in the document, etc. Repaginating

There are parallel strategies to do some of these things. As far as I know, text layout is mostly done with dynamic programming algorithms, and these algorithms are usually very parallel. Even if they are not, you can always use some kind of speculative algorithm to deal with that: you assume the 3 most likely scenarios for line 1, and while line 1 is being processed, you lay out line 2 multiple times using different assumptions about line 1. This will not give you perfect parallelism, but it will give you some improvement.

I don't see how the DEFLATE codec used by, say, PNG can be parallelized.

There are multiple ways to implement the DEFLATE codec; some compress better than others on different source material. The best implementations would try multiple variants in parallel and discard all but the best result. For current examples: run PNGOUT, OptiPNG, and DeflOpt in parallel for each PNG and discard the other two, but better approaches trying more variants are possible and likely to produce even smaller (albeit diminishing) gains.
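The "try several variants, keep the best" idea can be sketched with zlib's compression levels standing in for the different DEFLATE implementations (real tools like PNGOUT differ in match-finding heuristics, not just levels; zlib does release the GIL while compressing, so threads genuinely overlap here):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def best_deflate(data, levels=(1, 6, 9)):
    """Compress with several DEFLATE parameter sets in parallel and keep
    only the smallest result -- all variants decompress identically."""
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda lvl: zlib.compress(data, lvl),
                                   levels))
    return min(candidates, key=len)

payload = b"abcabcabc" * 1000
smallest = best_deflate(payload)
assert zlib.decompress(smallest) == payload   # lossless either way
print(len(payload), len(smallest))
```

Note this parallelizes across variants of one file, not within the DEFLATE stream itself, which stays serial for the reason the parent gives.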

However there are plenty that are: geophysics, biochemistry, engineering, and even editing home movies. I'd love some of these if they come out with better price/performance than an AMD system, or even if they just beat it a lot on performance without being ten times the cost (sad state of the very top end of Xeons now).

This isn't for general-purpose use. See those floating-point specs? Those tell you exactly where this is going, because there is one class of user that just can't get enough floating point performance. Scientific HPC. Protein folding, molecular biology modeling, cosmological simulations, higher resolution seismic analysis, neural network simulation, quantum system modeling. All things that thrive on processing power. A chip like this could have a lot of scientific applications.

In practice, the percentage of a process on a single-user system that can be parallelized is rarely 100 percent. If one holds the performance of a core constant, even a 1000 core system will still run as slowly as a 1 core system on the fraction that cannot.

Keep in mind, Amdahl's law can be expanded to all processes that make up a system. Even if you are using a single-process program, it can benefit from not having to share its core with the various system processes.
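Amdahl's bound, for reference: with parallelizable fraction p and n cores, speedup = 1 / ((1 - p) + p/n). Even a 95%-parallel job tops out quickly (0.95 here is just an illustrative figure):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: p = parallelizable fraction, n = core count."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 16, 72, 1000):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
# The serial 5% caps the speedup below 20x no matter how many cores you add.
```

Which is why the Knights Landing pairing with a fast Xeon for the serial fraction (mentioned elsewhere in the thread) makes sense.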

Even if you are using a single-process program, it can benefit from not having to share its core with the various system processes.

Then there's not really much of a benefit to adding more than a dual core, which will probably end up running the application with which the user is interacting on one core and the background applications and system processes on the other. To go beyond that, you have to either parallelize the application, run more than one CPU-bound application at once (which most desktop PC users tend not to do), or run more than one user at once using dual monitors, dual keyboards, and dual mice (which most desktop PC ope

Then there's not really much of a benefit to adding more than a dual core, which will probably end up running the application with which the user is interacting on one core and the background applications and system processes on the other.

Not necessarily. A process could be CPU bound and prefer not to make it worse by also waiting for I/O completion. Let another core drive the filesystem and talk to the block device (which might be a soft RAID).

My system frequently enough is busy compressing video or doing large compiles in the background while I work in the foreground.

If all you're doing is word processing, single thread speed isn't all that important either since it's mostly waiting for you to press a key.

To go beyond that, you have to either parallelize the application, run more than one CPU-bound application at once (which most desktop PC users tend not to do)

Let another core drive the filesystem and talk to the block device (which might be a soft RAID).

My system frequently enough is busy compressing video or doing large compiles in the background while I work in the foreground.

Then you're not most users. I was under the impression that most users tend not to use soft RAID 5/6 or CPU-intensive file systems, compress large videos, or do large compiles. I too compress video and do compiles, but geeks such as you and myself are edge cases.

While most people probably don't do large compiles, the video compression is just for shows I record. In my case, it just happens to happen on a PC, others might use an appliance for that. My filesystem isn't particularly CPU intensive but no filesystem uses zero cycles.

The people not doing any of that probably wouldn't fully utilize the full speed of a single core either, so it's not much of an issue.

For shows you record from OTA, cable, or satellite, it doesn't have to be significantly faster than real time. How many tuners does your PC have? You could put one video encode on each core, plus another core for the audio encodes. But then I confess ignorance as to how much CPU power it takes to encode video at, say, full 1080p/24.

I haven't studied it very carefully, but I do know that 5 cores was significantly faster than real time (re-encoding MPEG-2 to MP4) but 2 cores falls behind even if I don't do anything else. There's a lot of trade-off there; if I accept less compression or lower-quality video, it needs less CPU to accomplish it.

But then I confess ignorance as to how much CPU power it takes to encode video at, say, full 1080p/24

A lot. For 1080p H.264 on a Core 2 Duo running at 3.17 GHz, you are looking at 3-5 FPS at medium quality using both cores; the i5s didn't get significantly faster (read: usably faster), and I doubt the i7s did either.

With H.264 the more cores the better, you get roughly 60-80% speedup per core added. This translates to higher quality encodes at realtime if you start throwing more cores at the encoder.

Does everyone need this? Hell no, but to those of us that could use more cores it would be awesome.

Yes, we get it, it's not for everyone and there is still a lot of braindead software stuck in 1995 that should be multithreaded (due to the problem it is solving) but isn't, let alone the stuff that is going to be stuck on one thread forever. Meanwhile at least some stuff can use this thing. For a lot of people, bucketloads of memory is a better deal than large numbers of cores. For others, there is no problem pegging all cores at 100% for days on end.

Memory access is also a shared resource here, so it can be treated as I/O in a way since it requires going through a shared bus. Some local calculation can be done with local instruction/data cache but there is going to be a lot of banging on that bus. Some modern popular languages are really terrible at making effective use of caches (heavily templated stuff for example). That many cores using a typical asynchronous threading model (ie, the stuff people run on PCs) will be a waste of the chip, better to

You saw a speed-up because video and 3D are in a class of problems that are very easy to parallelize [wikipedia.org]. So is decompressing all the images in an HTML document. Laying out the document, on the other hand, isn't so easy to parallelize, if only because every floating box theoretically affects all the boxes that follow it.

Bitcoin has ASIC miners with ~10X the mining power per watt of most programmable alternatives such as GPGPU and FPGA. Anything less efficient than that is, or soon will become, cost-prohibitive to run.

The newer Bitcoin alternatives use memory-bound algorithms to prevent such a steep mining power escalation since memory capacity and bandwidth scale much more slowly than processing power but much more quickly on costs: with Bitcoin, increasing throughput by 10X simply required 10X the processing power but with the memory-bound alternatives, you also need 10X the RAM and 10X the memory bandwidth.

Those Nvidia GPU numbers are outdated for CUDAMiner. There's been a substantial speedup for newer architectures recently - a 780, for example, can typically run substantially over 500 khash. At 250 watt TDP, that puts it in the ballpark of AMD cards for KHash per watt, even though the hardware investment per khash is substantially higher. It means that people who were buying one of the Nvidia cards anyway will still be on the profitable side of things for as long as ATI will be, but you wouldn't want to bui

I just read up on QPI on the wiki, and it's a point-to-point processor interconnect, which replaces the front-side bus in Xeon and certain desktop platforms - presumably the Core i7s. PCIe, OTOH, is a serial computer expansion bus standard, which can take in things like graphics cards, SSDs, network cards, and other such peripheral controllers. I just don't see how QPI is any sort of a replacement for PCIe. That would almost be like arguing for PCIe being superseded by USB4 or something.

QPI is not meant as a replacement for PCIe. It's just the technology that links multiple processors (and memory controllers) together. KNL is essentially the next-generation MIC processor. The current generation is KNC, which is a separate PCIe card. I think it is in that sense that QPI "replaces" PCIe.