Posted
by
Soulskill
on Friday December 11, 2015 @03:54PM
from the built-out-of-matchsticks-and-the-brains-of-cockroaches dept.

An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them. In the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."

No; CPUs didn't /have/ to do that. MIPS toyed with both models for a while - initially MIPS was like "we don't interlock pipeline stages, so programmers need to be smart." Then the R4000 came out and attempted to implement that, and it was... complicated. So it got reverted.

Not all CPUs are like Intel CPUs (which aren't all like earlier Intel CPUs, which aren't all like 8080s, etc.).

Intel CPU designs go to great lengths to look very much like earlier Intel CPUs - even if the internals are very different, they are still backwards-compatible with earlier code dating right back to the 80386.

I do wonder what they could achieve if they were to abandon backwards compatibility and just ask people to recompile their old code. Probably lots of ARM sales.

"it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next"
Doesn't this go without saying?

Back in the day, pipelining - issuing, say, a new multiply instruction every clock, even though several earlier multiplies were still working their way thru the pipeline - was too expensive for most architectures. An instruction might take multiple clock cycles to execute, but in most architectures the multi-clock instruction would tie up the functional unit until the computation was done - you might be able to issue a new multiply every 10 clocks or something. Pipelining takes more gates and more design because you don't want one slow stage to determine the clock rate of the whole design.
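To make the throughput difference concrete, here's a toy Python sketch. The latency and instruction count are invented for illustration, not any real machine's timing:

```python
# Compare when N back-to-back independent multiplies finish on a
# non-pipelined unit (busy for the whole latency, so a new multiply can
# issue only every LATENCY cycles) versus a pipelined unit (a new multiply
# issues every clock, each still taking LATENCY cycles end to end).

LATENCY = 10   # cycles for one multiply (assumed value for illustration)
N = 8          # number of independent multiplies

# Non-pipelined: instruction i issues at cycle i*LATENCY, finishes LATENCY later.
non_pipelined_done = (N - 1) * LATENCY + LATENCY

# Pipelined: instruction i issues at cycle i, finishes LATENCY cycles later.
pipelined_done = (N - 1) + LATENCY

print(f"non-pipelined: {non_pipelined_done} cycles")  # 80
print(f"pipelined:     {pipelined_done} cycles")      # 17
```

Same per-instruction latency in both cases - the win is purely in how often you can start a new one.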

Which leads us to the early RISC computers, I can recall an early Sun SPARC architecture that lacked a hardware integer multiply instruction. The idea at the time was every instruction should take one clock, and any instruction that demanded too long a clock should be unrolled in software. So this version of SPARC used shifts and adds to do a multiply. At the time, that was a pure RISC design. One of the key insights in RISC, still useful today, is to separate main memory access from other computations.
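For anyone who hasn't seen it, the classic software multiply is just shift-and-add, one bit of the multiplier per step. This is an illustrative Python sketch of the idea, not the actual instruction sequence those SPARCs emitted:

```python
def mul_shift_add(a: int, b: int) -> int:
    """Unsigned multiply built from shifts and adds, one multiplier
    bit per iteration - the kind of loop a multiply-less CPU runs in
    software."""
    result = 0
    while b:
        if b & 1:          # low bit of multiplier set: add shifted multiplicand
            result += a
        a <<= 1            # shift multiplicand left one place
        b >>= 1            # consume one bit of the multiplier
    return result

print(mul_shift_add(13, 11))  # 143
```

A 32x32 multiply done this way takes on the order of 32 iterations of the loop, which is why a hardware multiplier (even a slow multi-cycle one) was such a big deal.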

The CPU design course I took in the late 80's said Seymour Cray invented that idea of separating loads and stores from computation, because even then, even with static RAM as main memory, accessing main memory was slower than accessing registers. So by separating loading from RAM into registers and storing from registers into RAM, the compiler could pre-schedule loads and stores such that they would not stall functional units. Cray also invented pipelining, another key feature in most modern CPUs (I'm not sure when ARM adopted pipelining, but I'm pretty sure some ARM architectures have it now). Of course Cray had vector registers and the consequent vector SIMD hardware.

I don't think Cray invented out of order execution, but I don't think he needed it; in Cray architectures, it would be the compiler's job to order instructions to prevent stalls. In CISC architectures, OOO is mostly a trick for preventing stalls without the compiler needing to worry about it (also, with many models and versions of the Intel instruction architecture out there, it would be painful to have to compile differently for each and every model). So, for example, the load part of an instruction could be scheduled early enough that the data would be in a register by the time the rest of the instruction needed it.

Anyway, the upshot is modern CPU designs have a bigger debt to Cray than to any other single design.

It MIGHT have been a system that would unconditionally execute the next instruction and produce garbage if there was a dependency, leaving the compiler to order them correctly and insert NOPs where needed, but in fact it kept track of that in hardware and inserted the NOPs itself.
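The hardware side of that (tracking dependencies and stalling instead of making the compiler pad with NOPs) is basically a scoreboard. Here's a toy Python sketch - the instruction format, registers, and latencies are all invented for illustration:

```python
def run(program):
    """In-order issue with a hardware interlock: an instruction stalls
    until every source register it reads is ready. Returns the cycle at
    which each instruction issues, so stalls show up as gaps."""
    ready = {}           # register -> cycle its value becomes available
    cycle = 0
    issue_cycles = []
    for dest, srcs, latency in program:
        # the interlock: wait for all sources (inserting bubbles/NOPs)
        cycle = max([cycle] + [ready.get(r, 0) for r in srcs])
        issue_cycles.append(cycle)
        ready[dest] = cycle + latency   # result usable `latency` cycles later
        cycle += 1                      # otherwise, one issue per cycle
    return issue_cycles

# program entries: (destination, sources, latency)
prog = [
    ("r1", [], 3),            # load r1 (3-cycle latency)
    ("r2", [], 3),            # load r2
    ("r3", ["r1", "r2"], 1),  # add r3 = r1 + r2: must wait for both loads
]
print(run(prog))  # [0, 1, 4] -- the add stalls two cycles waiting on the loads
```

In the "compiler's problem" model, those two bubble cycles would instead be explicit NOPs (or, better, useful independent instructions) scheduled into the gap.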

"Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time."

Oh god, this isn't even remotely correct.

For one, a similar design was used by Cray's earlier machine, the CDC 6600, which also had 10 PPs. And by the 7600, and the 8600. For another, there were dozens of machines with similar designs that predate this, including PEPE and the ILLIAC IV, both of which had hundreds of units.

I think the first is what we would call the pipeline [wikipedia.org] today and the second means parallel execution units. Using the word "functional units" for both is a bit confusing. Early RISC pipelines had 5 stages that are described in that link (and that brings back some memories - I remember studying it in college).

Funny thing is, I actually read the article to learn about what my first girlfriend's dad did - he was an engineer that worked on that thing (and yeah, she was a total nerd girl). I'm still Facebook friends with her, should point her to the article.

What's a bit confusing is that the article is about Seymour Cray, the creator of supercomputers, but not about Cray computers. What is described is actually the Control Data computers, starting with the 6600 (I had the joy to learn on a 175).

Yes, the PPs are _not_ "parallel processors", they are "peripheral processors". The 175 had 12 peripheral processors with quite limited capabilities, running at 12 MHz instead of the main computer's 40, and exclusively responsible for handling I/O.

Nowhere near that. From Cray's (admittedly only distantly related to the original Cray...) web site, http://www.cray.com/company/history

The first Cray®-1 system was installed at Los Alamos National Laboratory in 1976
and cost $8.8 million. It boasted a world-record speed of 160 million floating-point
operations per second (160 megaflops) and an 8 MB (1 million word) main memory.

The largest X-MP had 4 CPUs, each with a floating-point adder and multiplier and a clock speed of ~105 MHz. So the peak performance of these machines was 840 MFlops. Achieving and sustaining that, though, was tricky, and was only possible in large vector operations.

The impressive part of the architecture was its memory: at peak, the memory subsystem could complete 16 memory references per clock cycle (each delivering 64 bits of data), so the peak memory bandwidth was 13 GBps, or roughly what a 64-bit DDR3 solution delivers today.
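The arithmetic behind both numbers checks out. A quick back-of-the-envelope in Python, using the figures quoted above (the ~105 MHz clock is approximate):

```python
# Peak floating-point rate: 4 CPUs, each retiring one add and one
# multiply per clock.
cpus = 4
flops_per_cycle = 2          # one FP add + one FP multiply per CPU per clock
clock_hz = 105e6             # ~105 MHz X-MP clock (approximate)
peak_mflops = cpus * flops_per_cycle * clock_hz / 1e6
print(peak_mflops)           # 840.0

# Peak memory bandwidth: 16 references per clock, 64 bits (8 bytes) each.
refs_per_cycle = 16
bytes_per_ref = 8
peak_bw_gbps = refs_per_cycle * bytes_per_ref * clock_hz / 1e9
print(round(peak_bw_gbps, 2))  # 13.44
```

13.4 GB/s is indeed in the same ballpark as a single 64-bit DDR3-1600 channel (12.8 GB/s) - not bad for a 1980s machine.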

Amdahl worked mostly on IBM mainframe clones, and focused on business applications. More emphasis on reliability, and processing currency, integers (counts), and business logic. Example: payroll for a big corporation.

Cray's machines were mostly used for scientific, engineering, research, and military applications. More emphasis on floating point number processing. Example: climate simulations.

This is an error from the original article, not from the summary. If the author didn't even bother to look up what "PP" actually stood for, I don't have a lot of confidence in the rest of the article's scholarship. Heck, ONE CLICK TO WIKIPEDIA would have given her the proper definition.

I came here to say this. In the early 80's I worked on Control Data Cyber 174C mainframes (we had two). Liquid cooled, maybe 20 feet long, with hinged chassis that swung out like doors (maybe 40" by 6' and about 10" thick). One chassis was a CPU, two were memory I think, and one was for 10+ Peripheral Processor Units (PPUs), which did 100% of the I/O. A whopping 40 MHz, and a 208-bit memory bus with SECDED.

I interned (sort of) at Babcock and Wilcox's computing center around 1980. We had several CDC systems, including a 76 ("7600"), which was built in a horseshoe arrangement much like the Cray-1. (The field engineers used its interior as a storage closet.) Me, I was just hauling tapes, card decks and printouts, but I did get to learn a bit about the machines, and a lot more once I got into comp architecture classes in college. It was a great place for a geek.

I worked with the CDC 6400 at the University of Colorado. It had an Extended Core Storage unit, a separate cabinet full of magnetic core memory (they came up to 2 M words; I don't know how large CU's was). We gave machine room tours, including opening the door on the ECS, until somebody noticed that the machine sometimes crashed when the door was shut. Don't want to jiggle those cores, I guess. Just found this: http://www.corememoryshield.com/report.html

Right on. I worked at Control Data, as a CPU logic designer. The PP's were peripheral processors. The article is full of so much egregiously incorrect tripe I won't even bother to type up a correction. My advice to everyone is to completely ignore the article unless you want your head stuffed full of misinformation.

The multiple functional units idea wasn't new with Cray's supercomputer. He was doing much the same thing in the computers he designed for Control Data Corporation (CDC).

Also, it seems astonishing that there would be no copies of that Cray software around, anywhere, other than on an old disk pack. There are still copies of the software for those CDC machines. Maybe that's because there were so many of them -- relative to the Crays.