angry tapir writes: Unisys is phasing out its decades-old mainframe processor. The chip is used in some of Unisys' ClearPath flagship mainframes, but the company is moving to Intel's x86 chips in Libra and Dorado servers in the ClearPath line. The aging CMOS chip will be "sunsetted" in Libra servers by the end of August and in the Dorado line by the end of 2015. Dorado 880E and 890E mainframes will use the CMOS chip until the servers are phased out, which is set to happen by the end of 2015.

Wrong.
If Unisys mainframes are anything like IBM s390, then almost everything is written in assembler. So, unless they have a whole hardware translation / emulation layer, you can't just recompile.
When moving to a new processor architecture, if everything is written in assembler it's much easier just to throw everything out and restart.

Speaking as someone who programs and administers computers on the Dorado line, that is total bollocks. dreamchaser's post is also inaccurate.

Part of the Exec (= the OS) is written in assembler; the rest is in a proprietary language called Plus (a bit like Pascal) or in C. The same applies to processors and libraries provided by Unisys or third parties. User programs can be in Fortran, Cobol, C or assembler. Pascal and PL/I were dropped a few years back, and use of Plus in non-Unisys-written code is unsupported.

The key part of the article was: "Both the OSes will execute tasks on Intel's Xeon server chips through a firmware layer that translates the OS code for execution on x86 chips." Existing programs will work without recompilation; it is the Exec which needs to make the accommodations.
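
For anyone wondering what "a firmware layer that translates" even means, here's the bare-bones concept as a fetch-decode-dispatch loop in C. Everything here (the opcodes, the three-word instruction format, the register file) is invented for illustration; the real Unisys layer is far more sophisticated and presumably caches translated x86 code rather than interpreting one instruction at a time.

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOAD_IMM = 1, OP_ADD = 2, OP_PRINT = 3 };

    typedef struct {
        uint64_t reg[8];   /* guest registers */
        size_t   pc;       /* guest program counter */
    } Guest;

    /* Each toy guest instruction is three words: op, dest, src/immediate. */
    static void run(Guest *g, const uint64_t *code) {
        for (;;) {
            uint64_t op = code[g->pc], a = code[g->pc + 1], b = code[g->pc + 2];
            g->pc += 3;
            switch (op) {
            case OP_LOAD_IMM: g->reg[a] = b;          break;
            case OP_ADD:      g->reg[a] += g->reg[b]; break;
            case OP_PRINT:    printf("%llu\n", (unsigned long long)g->reg[a]); break;
            case OP_HALT:     return;
            }
        }
    }

    int main(void) {
        const uint64_t prog[] = {
            OP_LOAD_IMM, 0, 40,
            OP_LOAD_IMM, 1, 2,
            OP_ADD,      0, 1,
            OP_PRINT,    0, 0,
            OP_HALT,     0, 0,
        };
        Guest g = {0};
        run(&g, prog);   /* prints 42 */
        return 0;
    }

The hard part in real life isn't the loop, it's preserving every architectural quirk (word size, complement arithmetic, partial-word access) bit-exactly while still going fast.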

I don't know much (OK, anything at all) about the Libra line, but the Dorado machines have some very unusual characteristics, such as 9-bit bytes, which would render anything other than hardware compatibility a total disaster, necessitating a forced conversion to another platform immediately.

Right. Goes back to the multiple-of-9-bit native word length of the entire 11xx/22xx heritage, back to the Univac 418 [wikipedia.org]. Since bytes aren't the native access mode in that architecture anyway, they're an afterthought and rather harder to code for in assembler.

That's not the only oddity of that architecture, either. 1s complement math? Negative zero?
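
For anyone who never met a ones'-complement box, a quick illustration, simulated on two's-complement hardware (the 36-bit width matches the 2200 line; everything else is just a toy):

    #include <stdio.h>
    #include <stdint.h>

    /* In ones' complement, negation is a pure bitwise NOT, which means
     * there are two encodings of zero: +0 (all bits clear) and -0 (all
     * bits set). */

    #define WORD_MASK ((1ULL << 36) - 1)   /* low 36 bits of a 64-bit host word */

    static uint64_t oc_negate(uint64_t w) {
        return ~w & WORD_MASK;             /* bitwise NOT = ones'-complement negate */
    }

    int main(void) {
        uint64_t plus_zero  = 0;
        uint64_t minus_zero = oc_negate(plus_zero);
        printf("+0 = %09llx\n", (unsigned long long)plus_zero);   /* 000000000 */
        printf("-0 = %09llx\n", (unsigned long long)minus_zero);  /* fffffffff */
        /* The hardware had to treat both patterns as equal in comparisons,
           one of many quirks an x86 translation layer must preserve. */
        return 0;
    }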

Yeah. I'm an old grey geek that started out on an 1180 back in the day. Mostly assembler real-time stuff.

I'm a bit misty-eyed at the thought of that heritage code running, essentially, by run-time emulation rather than natively.

36-bit words were very common for a while, so a 9-bit byte is a somewhat natural extension of that. Such architectures were current, state-of-the-art models at the same time the x86's direct ancestor, the 4004, was born.

It seems a bit ironic that some people here seem to think Unisys should throw out all the old software and start from scratch, so that it can be replaced by a chip that has never been able to throw off its own old baggage or compatibility shackles. After all, today's Core i7 processors are

Yes, back in the 70s. Forty years ago. They have not been commonplace for decades. That's why I was impressed that they still exist and were apparently still being updated. The x86 of today is almost nothing like an 8008, except for supporting it via microcode translation if someone wants to run some really old code. Chip development is expensive; you can't support it with a tiny niche market forever. At some point it becomes more efficient to just build an

However, 48-bit processors exist today (some DSPs), and even some 36-bit processors (as cores on an FPGA).

As for x86 versus 8008, yes, they are not much alike at all today. However, if you look at all the chips in between, there are only relatively incremental changes between each new architecture in the family, and each step includes some degree of backwards compatibility with the prior architecture (either directly compatible or indirectly via simple program translation). There's never been a step where Intel

I don't know much (OK, anything at all) about the Libra line, but the Dorado machines have some very unusual characteristics, such as 9-bit bytes

Nice to know there's still some non-DSP hardware out there with oddly-sized bytes. Maybe not so nice if you have to develop for it, but it gives me examples to point to when people ask why C's data types are defined the way they are.
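
For instance, here's the quick check you can run on any hosted C implementation (CHAR_BIT comes from <limits.h>; nothing here is hypothetical except the machine you'd love to run it on):

    #include <limits.h>
    #include <stdio.h>

    /* C only guarantees minimum widths, precisely so it can be implemented
     * on machines like these: char is at least 8 bits, int at least 16,
     * long at least 32. On a 36-bit, 9-bit-byte machine a conforming
     * compiler could make CHAR_BIT 9 and int 36 bits wide. */

    int main(void) {
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(int)  = %zu bytes (%zu bits)\n",
               sizeof(int), sizeof(int) * (size_t)CHAR_BIT);
        printf("sizeof(long) = %zu bytes (%zu bits)\n",
               sizeof(long), sizeof(long) * (size_t)CHAR_BIT);
        return 0;
    }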

The 8-bit byte is really sort of arbitrary. For quite some time there were many computers that did not have character-addressable memory, and the focus was on what word size was best. 36-bit words were common for some time, with an 18-bit half-word being used on the DECsystem-10, which could hold a word pointer. So those 36 bits could either be 4 8-bit characters with an extra 4 bits for meta information (often used by the garbage collector), or else 4 9-bit characters, 5 7-bit characters, or even 6 6-bit characters.
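
A rough sketch of that kind of packing, using the low 36 bits of a 64-bit host word to stand in for a real machine word (the 6-bit code here is a made-up toy, A=1 through Z=26, not Fieldata or any real character set):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack six 6-bit characters into one 36-bit word, the way
     * word-addressed machines stored text. */
    static uint64_t pack6(const char *s) {
        uint64_t w = 0;
        for (int i = 0; i < 6; i++)
            w = (w << 6) | (uint64_t)(s[i] - 'A' + 1);   /* 6 bits per char */
        return w & ((1ULL << 36) - 1);
    }

    static void unpack6(uint64_t w, char *out) {
        for (int i = 5; i >= 0; i--) {
            out[i] = (char)((w & 0x3F) + 'A' - 1);
            w >>= 6;
        }
        out[6] = '\0';
    }

    int main(void) {
        char buf[7];
        uint64_t w = pack6("UNIVAC");   /* exactly one 36-bit word */
        unpack6(w, buf);
        printf("word = %09llx -> %s\n", (unsigned long long)w, buf);
        return 0;
    }

Extracting one character means a shift and a mask rather than a single byte load, which is exactly why bytes felt like an afterthought on these machines.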

The switch to x86 processors won't affect existing Unisys customers looking to upgrade older mainframes with faster systems. x86 Dorado servers will continue to support the ClearPath OS 2200 operating system, while the Libra line will support the ClearPath MCP operating system. Both the OSes will execute tasks on Intel's Xeon server chips through a firmware layer that translates the OS code for execution on x86 chips.

That's over half a century of the UNIVAC 36-bit architecture, going back to the UNIVAC 1107 [wikipedia.org]. The operating system in use today, originally EXEC 8, later OS 1100, later OS 2200, first ran on the UNIVAC 1108 in 1966.

Some programs from the 1970s will still run today. Some from that era are still being maintained and distributed.

I first became aware of Unisys in the '80s when the Australian TV broadcasters used their stuff for instant replay, drawing annotations over stills, and slow motion. They got to display their "Unisys Computer" logo in the corner. Never actually had to use them professionally, though. Looks like the future is becoming homogeneous. IBM dropped the specialised AS/400 and System Z CPUs and migrated to POWER; everyone else seems to be dropping specialised CPUs and moving to x86 or POWER as well.

Sperry Univac was one company back then, probably formed by a merger or takeover in the '60s or '70s (I can't be bothered to look it up on Wackypedia).

What happened in 1986 was that Burroughs' CEO (Michael Blumenthal, previously Secretary of somethingorother in the Carter administration) launched a leveraged buyout of Sperry. Sperry fought it but lost; the resulting company was then renamed Unisys and had so much debt from the takeover that it had to divest assets simply to survive. Blumenthal himself did very

I was under the impression that the POWER chipset was specifically designed as a chip realization of the iSeries CPU, actually dating back to when it was still the AS/400, possibly even the System/38.

IBM fielded a desktop mainframe (the Model 9000) that contained a pair of Motorola MC68000 chips, one with a special mask for the System/360 instruction set. The instruction architecture of the MC68000 was a lot like the System/360 instruction architecture, so it was a good fit.

The Z-series and POWER are not quite separate chips. They're separate instruction decoders but they're largely the same pipelines after that. There are some tweaks, but within a generation they share more design than either does with the previous generation of the same processor.

That largely applies to pretty much everything, but there are actual cases where Z and P are incredibly different, so unifying their back-end implementations will be potentially limiting in the future. Specifically, I am referring to the memory coherence model: Z assumes a rigid write-back ordering, where P allows writes to be highly out of sync between cores. Now, I am not sure if any modern P implementations actually exploit this flexibility, so the difference might be moot, but this decision could be limiting
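
Not z or POWER assembly, but the same distinction in portable C11 terms (a minimal sketch; all the names are mine): on a strongly-ordered machine the release/acquire pair below is essentially free, while on a weakly-ordered one it emits real barriers, and without it the reader could see the flag before the data.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <pthread.h>

    static atomic_int data_word = 0;
    static atomic_int ready     = 0;

    static void *writer(void *arg) {
        (void)arg;
        atomic_store_explicit(&data_word, 42, memory_order_relaxed);
        /* Release: orders the data store before the flag store. */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *reader(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;   /* spin until the flag is visible */
        printf("data = %d\n",
               atomic_load_explicit(&data_word, memory_order_relaxed));
        return NULL;
    }

    int main(void) {
        pthread_t w, r;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }

(Compile with -pthread.) Code written assuming the strong model silently depends on ordering the weak model doesn't promise, which is why sharing a back end between the two is trickier than it looks.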

IBM is deploying System i on POWER now, but they are still making custom z processors for the mainframe. Although the mainframe can also carry some POWER and x86 processors, those really aren't for mainframe processing, just high-RAS local access for offloading certain workloads.
It's going to be interesting to see what ARM does. There are multiple vendors looking at ARM-based servers, with HP already having one out.

That is a popular myth (in some circles) but not reality. IBM still designs and manufactures the Z line, which is still compatible with (most) old 360 software. While IBM could use an emulator, it would be hard or impossible to keep up the performance running dusty decks, and there would be another rather complex emulation layer that would need to be verified while keeping the extreme reliability the customers expect from their mainframes.

What is true is that there is some cross-pollination of ideas and building

The Burroughs part is a tagged memory architecture. There is no assembler, a variant of ALGOL is the system programming language. It's a hardware stack machine. Each memory word has tag bits that identify what kind of information is stored. Memory addressing is through segments, which do hardware bounds checking. Check out http://en.wikipedia.org/wiki/Burroughs_large_systems [wikipedia.org] for details.
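
A toy model of the tag idea in C. The tag values and layout here are invented for illustration; the real Burroughs large systems carried a small hardware tag alongside each 48-bit data word, and the checks ran in hardware, not in a function call.

    #include <stdint.h>
    #include <stdio.h>

    typedef enum { TAG_DATA, TAG_DESCRIPTOR, TAG_CODE } Tag;

    typedef struct {
        Tag      tag;    /* what the word *is*, maintained by "hardware" */
        uint64_t bits;   /* the word's contents */
    } Word;

    /* The machine refuses to execute anything not tagged as code, and
     * would likewise refuse to index through anything not tagged as a
     * descriptor. */
    static int execute(const Word *w) {
        if (w->tag != TAG_CODE) {
            fprintf(stderr, "fault: attempt to execute non-code word\n");
            return -1;
        }
        /* ... dispatch w->bits as an instruction ... */
        return 0;
    }

    int main(void) {
        Word d = { TAG_DATA, 42 };
        execute(&d);   /* faults: plain data can never be run as code */
        return 0;
    }

That one property, enforced everywhere and all the time, kills whole classes of exploits (smashed return addresses, data executed as code) by construction.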

The hardware and software were designed concurrently. This means that the system is very efficient and not very prone to software errors. Because of the hardware addressing mechanisms and the memory protection bits, this machine was immune to many of the security issues that plague modern CPU architectures. It is nearly impossible to break security, because it is enforced by a combination of hardware and software. No current x86/Power/Sparc/??? will ever be as secure as this kind of machine. (The Mill CPU has some of the same characteristics, but lacks tagged memory bits in main memory.)

As a field, computing took a wrong turn when it went after MIPS as a measure of "goodness". Using hardware resources to enforce secure computing addresses the fundamental problem of writing reliable software. It protects against coding errors and against malicious attacks. Now that hardware is cheap, the additional cost of tag bits in memory or address-range checking could be easily supported.

But we're stuck with fast, insecure architectures, and there seems to be no turning back. It wouldn't be surprising if current systems are in fact less efficient when you take into account the cost of trying to make insecure hardware secure, along with the costs associated with software failures and stolen data, corrupted databases, downtime, debugging, etc. (By the way, Burroughs systems had great uptimes, which was also true of Symbolics Lisp systems, which also had memory tag bits and were programmed from the bottom up in a high-level language.)

Both Java and C# are memory-safe and in widespread use. They are not delivering utmost efficiency, though. That is because you cannot allocate complex data types on the stack or as an array of object values.

And they don't have destructors, a very elegant, efficient and safety-improving mechanism.

A problem with Burroughs' machines, though, is that they kept tight control over the languages that could be used on them. At my university, someone who wanted to write a Lisp system for one actually needed to get permission from Burroughs first, lest their code break other programs on the system. In some sense, part of the security involved only using approved Burroughs tools, because you really could screw up the machine if you used some code that did unusual things (there may not have been an assembler

Uh, I thought this was the descendant of Burroughs B5000? You know, the computer that Alan Kay tells everyone to take a look at to understand how silly today's architectures look in comparison.

That's the other Unisys line; they have an A Series (from Burroughs) and a 2200 Series (from UNIVAC).

I used a B5500, at UC Santa Cruz, in a summer course on computer architecture in 1975, taught by one of its designers. Burroughs donated the obsolete machine, and we stepped it through instructions from its maintenance panel, watching the stack hardware work. We were also taken up to Xerox PARC to meet Alan Kay and see the original prototype Alto machines, years before Steve Jobs did. (They were really Data General Nova machines inside, with different microcode.)

The Altos were not too Nova-like, but the built-in part of their microcode did implement the Nova instruction set (except that the I/O instructions were used for other stuff). Alan had been previously using Novas in his projects (Smalltalk-72 was first implemented in Nova BASIC and then assembly, for example) and this allowed him to quickly port to the Alto.

x86 has been powerful enough for at least 10 years now. The holdup was probably much more due to "business" (customer lock-in) reasons, inertia (those folks developing that Unisys CPU), and the lack of a proper binary translator at Unisys.

IBM did some research on binary translation as far back as the 1990s. DEC had the FX!32 translator, which demonstrated that the concept is quite feasible.

IBM still has the financial resources to develop CPUs for a niche market, because their three niches (mainframe, mini, Unix) are large

So they are looking for Rosetta - the technology Apple acquired for running PPC binaries on the x86 using binary translation.

Well, good luck to them; even though they could just license the technology, they probably won't. The job posting says they are relying on LLVM-IR as a means of translating the code.

In case they care, Apple acquired the company that produced Rosetta, so that's where you want to start to license it; or Facebook last year acquired a small company that did the same type of thing. I doubt they'd be able to hire the engineers away from Google, but if they're interested, Google has NaCl and PNaCl, which have to use similar techniques.

...and there's a good reason that Avie Tevanian went with "fat binaries" instead of TenDRA style ANDF or IR, and there's a good reason we (at Apple) extended it to Intel systems, rather than continuing on with Rosetta (though, to be fair, there isn't really a technical reason for the death of Classic or Rosetta, other than a broken build and archival process, really).

Well, good luck to them; even though they could just license the technology, they probably won't. The job posting says they are relying on LLVM-IR as a means of translating the code.

Maybe they tried working with Transitive in the mid-2000s. Maybe Transitive failed. Maybe the Transitive people couldn't do what was asked of them. Maybe their performance numbers didn't work out.

Remember that what Apple is doing is translating binary apps. What Unisys needs done is translating / emulating whole sections of the OS. That is a lot harder.

That actually predates the code working reasonably well. I believe Apple also had an exclusivity license for some of the code.

If you count only the BSD system calls, it would have been a small job; if you add the Mach, sysctl, fcntl, ioctl, and other multiplexed BSD system calls, there was parameter and endian switching work that happened for in excess of 8000 APIs, and that's not including the Mach message contents diddling that had to take place between the binary application and the native r
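
To make "endian switching" concrete, here's the shape of what a shim does for one argument crossing the big-endian-guest to little-endian-native boundary. The struct and field names are invented for illustration; the real shims handled thousands of distinct call signatures.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t swap32(uint32_t v) {
        return (v >> 24) | ((v >> 8) & 0xFF00) |
               ((v << 8) & 0xFF0000) | (v << 24);
    }

    /* A big-endian guest hands us a struct; every multi-byte field must
     * be swapped before the little-endian native call sees it. */
    struct guest_timeval { uint32_t tv_sec, tv_usec; };

    static void fix_timeval(struct guest_timeval *tv) {
        tv->tv_sec  = swap32(tv->tv_sec);
        tv->tv_usec = swap32(tv->tv_usec);
    }

    int main(void) {
        struct guest_timeval tv = { 0x12345678u, 0x0000ABCDu };
        fix_timeval(&tv);
        printf("%08x %08x\n", tv.tv_sec, tv.tv_usec);  /* 78563412 cdab0000 */
        return 0;
    }

Now multiply that by every pointer-bearing argument of every multiplexed call, in both directions, and you see why it was not a small job.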

DEC got off the ground as the better and more agile alternative to IBM, especially for smaller computers. It really was very innovative. However, after some years it started to look like a full-blown clone of IBM, just as bulky and bureaucratic, and blind to the more nimble competition coming in from the sides.

Never in three decades of working with IBM mainframes have I seen bugs in the emulation of the instruction set. Did you have "whippy-shit" programmers who were relying on some of the undocumented or unsupported "combination" instructions?

I've never used any Unix or Linux on an IBM (or Amdahl) mainframe; that's a weird use case whose cost justification I couldn't imagine. Big-iron Unix boxes are now kicking the mainframe's butt in database benchmarks.

The POWER was nice (though greatly improved by PowerPC), but AIX was just so strange. It quacked like a Unix duck, it swam like a Unix duck, but it didn't look at all like a Unix duck. It's like they took a good long look at Unix, realized that it was a good thing, but then decided to stamp it on every level with the IBM look and feel. Maybe they thought that if it didn't look like the product of a committee, their traditional customers would not want it. The AIX machine at one pla

All the big Unix iron vendors have done that: Solaris is weird with its startup control system, HP-UX has all kinds of partner-added things like a basic Veritas volume manager... which is why I get nauseous hearing about systemd for Linux, with everything and the kitchen sink being thrown into it (the Perl 6 of startup systems).

The Unisys systems are ones-complement, 36-bit systems, with overlay managers for their banks of memory; the Univac side of the house still supports running 'lost-deck' code from the 1950s. As in, the executable exists, but the source code was lost decades ago. So there is NO way to 'just recompile'.

... to me was that Unisys was still selling computer systems. The only time I thought about the company in recent years was when dealing with their help desk software package. Prior to that my last contact with the company was having to use an aging 110x mainframe that was running EXEC-something. A horrible user interface, BTW. It seemed to be designed to make using the system a major pain in the butt. I was so happy when a co-worker pointed out that I could move my code onto the PDP-11 and actually get som

I played in a band with a guy who sold some significant patents to Unisys for their mainframes and worked in the CTO office. Smart guy, actually worked as a pro musician for a while in the 70s with David Bromberg (session player for the hippie giants like Joan Baez). 6 or 7 years ago, I was interviewing a guy applying to be my manager. He also worked at Unisys, and so I asked him if he knew the guy with the IP, and he got a funny look on his face - he did know him, and said there was antagonism in that rela

A lot of the 1960s big iron had 9-bit bytes, e.g. the venerable DEC-10. They needed 36 bits to represent the range of data they wanted (typically floating point). Those 36 bits could only be divided according to 36 = 2x2x3x3 (i.e. chunks of 1, 2, 3, 4, 6, 9, 12, 18). Some used 6 bits to represent a character in a very limited character set, some used 9 bits per character (not all bit values necessarily corresponded to a printable character). The big change to 8-bit bytes came from the IBM 360 when it chose 32
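
The arithmetic behind those choices, as a trivial check (nothing here is specific to any real machine):

    #include <stdio.h>

    /* Any character width that divides 36 packs evenly into a word,
     * which is exactly the list of chunk sizes above. */
    int main(void) {
        for (int bits = 1; bits <= 36; bits++)
            if (36 % bits == 0)
                printf("%2d-bit chars: %d per 36-bit word\n", bits, 36 / bits);
        return 0;
    }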