Posted by timothy on Wednesday April 05, 2000 @08:13AM
from the one-for-each-cousin dept.

Sun Tzu writes: "Just in case you're wondering what else to do with the mainframe in your basement, here's some useful information to help you prepare a proposal to management." The article is clear and candid, noting things like, "In some discussions the issue of the S/390's 'five 9's' reliability is brought up. However, IBM's 99.999% uptime claim is for clusters of mainframes, not a single system." And no, running 40,000+ virtual Linux boxes is not that practical. Still and all, I wonder how much an S/390 will cost in 3 years ...

Interesting comments so far... Some of us are not MVS system programmers and don't have access to an OS/390 for "play" purposes. Is purchasing used MF hardware feasible? Would the discount be great enough to permit an individual or smaller enterprise to acquire one for development purposes? Or would the MVS software license fee kill this? I know used AS/400 machines can be found on eBay for a few hundred; the problem there is paying for the software license. Even so, one could assemble an AS/400 for a few thousand. Is this possible with mainframes?

In terms of reliability, scalability, and the amount of work a mainframe can handle, Unix vs. OS/390 is more like CP/M vs. Linux.

I have seen many NT and Novell servers and even a few Unix servers go down over the years, but I have never seen a mainframe go down. Also, mainframes cannot handle graphics like a workstation can, so running X is out of the question.

The connectivity of OS/390 is superb. All you have to do is type gosysa to connect to system A or gosysb to connect to system B.

Try doing this on a Unix box without a terminal. There are many mainframe terminal software packages out there, but none for Unix besides telnet, which lags quite a way behind.

You also do not have to worry about users gaining root access with mainframe operating systems.

Linux may be a good OS, but it's no mainframe OS. That's for sure. I would like to hear some opinions on this from IT professionals who have actually used mainframes.

Strictly speaking, they are not different systems (i.e. hardware); they are different software. The Linux runs under VM on a machine that can also run OS/390. In fact, you can run multiple OS/390, VM, Linux, and AIX (not sure if this is still supported) LPARs on the same machine at the same time.

One important fact is that companies are turning to "Application Service Providers" more and more to outsource their application hosting. This can be anything from websites to ERP systems.

While a medium size corporation might not be able to handle a mainframe (or even a Unix midrange) system, the ASPs certainly can get this expertise. They only need to worry about maintaining one 'frame, and each of their customers can have their own secure Linux partition which is firewalled from the other customers.

This is a way of extending IBM's TCO numbers to even smaller shops. Linux on a mainframe looks like it will be huge in the application hosting/ASP market.

>>The connectivity of OS/390 is superb. All you have to do is type gosysa to connect to system A or gosysb to connect to system B.

Is this like remsh sysa / remsh sysb? What sort of connectivity are we talking about here? Because this doesn't sound much different from the Unix model. I can alias 'go' to remsh or telnet and do the same thing.
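To make that point concrete, here's a minimal sketch of the alias idea; the hostnames sysa/sysb and the choice of telnet are placeholders for whatever your site actually runs:

```shell
# Hypothetical Unix counterpart to the mainframe's gosysa/gosysb commands:
# wrap the remote-login tool of choice in a short function.
gosys() {
  host="sys$1"            # "gosys a" -> host "sysa"
  echo "connecting to $host"
  # telnet "$host"        # or: remsh "$host" / rsh "$host" on a trusted net
}

gosys a                   # prints: connecting to sysa
```

The actual login line is commented out so the sketch is runnable anywhere; uncomment whichever transport your network trusts.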

>>Try doing this on a Unix box without a terminal. There are many mainframe terminal software packages out there, but none for Unix besides telnet, which lags quite a way behind.

Granted, on a Unix box without a terminal, you aren't going to be able to type anything. However, I don't see this as a disadvantage, since a mainframe without a keyboard is going to have the same problem. So I don't see a real difference. What is it that telnet (or rsh, if you happen to be on a known trusted network) doesn't do?

I understand that there are significant differences between mainframe hardware and software versus a Unix box. I just don't think you've illustrated particularly relevant differences.

>>You also do not have to worry about users gaining root access with mainframe operating systems.

You could say the same about a Mac, though. Are mainframes not remotely administerable?

You seem to understand much about what is going on with the low-level stuff of VM needed to make all this fly. I have some detailed questions I want to ask you directly. I am trying to convince the powers that be to get VM set up here.

Probably not effective for 24 low end machines, but when it comes to bang for the buck most articles I've read state the S/390 is among the most cost effective form of computing. If you're a very large company and you have a few hundred expensive Sun or HP servers, your numbers might be a little different.

Again, if you work for a large company you've probably got a S/390 sitting and running some pretty critical stuff. This is really targeting those companies who already have a S/390 and have the experience to maintain it or those companies whose server farm has grown too fast to manage.

No operating system runs "natively" on S/390 hardware. OS/390 and VM itself are controlled by the LPAR microcode in the machine itself. This code is primarily an offshoot of VM and allows you to run multiple operating systems at the same time in logical partitions. Linux will run in an LPAR as a primary OS. The overhead in this is really minimal and if you were to dedicate an entire box to Linux you would see almost zero overhead for the LPAR microcode.

More realistically, if your shop has an S/390 with some excess capacity, you can use it now rather than letting it sit idle. So all we need is a piratical distro with an Eject button for the main OS. Once we've got them to install it, then we take over the world.

Probably the biggest cost involved in running a mainframe is software licensing. For a large company, this can run into millions of dollars per annum. Running Free software could well make such a system extremely affordable.

Many companies are moving systems off mainframes to NT, *nix, and AS/400. Suddenly they have dozens of systems to back up and maintain, requiring a lot more staff, often on high salaries. The space taken by a dozen or so NT servers is far from cheap. The I/O capabilities of mainframes are awesome; if you deliver a lot of content rather than crunch a lot of numbers, I would say this could be a winner.

If you are moving systems off the mainframe (buying *nix-based packages is often the reason), there will be plenty of spare capacity to run a Linux LPAR or two. It could also save you buying new hardware! Old mainframes (at least until now) are not highly sought after and often end up as 'boat anchors'. Not the sort of thing you want at home, but for business, they could be the perfect solution.

The switch back to daylight saving time in spring already accounts for 1 hour/year. Indeed, mainframes don't store the time in GMT, and the only way to avoid major confusion is to switch them off during the one hour in spring that is doubled... Or maybe, because that hour really doesn't exist, they just didn't take it into account in their calculation? Or is this "five nines" reliability only for places which don't have daylight saving time?

I personally haven't had much experience with mainframes, but I am told that they are more powerful than the AS/400 minicomputer. At no point did I dis the mainframe; I just said how cool the AS/400 is and that it's the smaller cousin of the mainframe, which implies that the mainframe is better...

I'm just wondering if you have any clue at all what you are talking about. I am involved heavily with IBM AS/400s (the mainframe's _smaller_ cousin) and quite frankly, at some stuff they rule. Databases and batch processing can be made to FLY on these machines. We did a test in IBM's load-testing facilities in Italy with a huge machine holding an 18 TB database, performing complex transactions on 75% of the data in only 4 hours. Halfway through we yanked the power from an array of disks and it kept merrily chugging; 20 minutes later we plugged the disks back in and watched it heal itself.

I believe in the best tool for the job, sometimes it's Unix, sometimes AS/400 and sometimes even windows. It's foolish to rule out a platform when you obviously know nothing about it.

Using the S/390's virtual machine feature to create large numbers of small web server environments, for example, would not be economically advantageous. It would require many 12-CPU mainframes to support the same load as, say, a thousand 256 MB, 600 MHz Pentium III systems. Workloads that divide easily and don't individually require huge resources do not showcase the strengths of the mainframe.

Although he doesn't say it right out, it sure sounds like he is calculating this on the basis of processor performance. It sounds about right for that, but it completely ignores the fact that the mainframe's strong point is not CPU horsepower, but I/O bandwidth. By coincidence, most web servers bottleneck on I/O bandwidth without using more than a fraction of the available CPU power of their boxes. There are certainly exceptions to that, but it is the most common situation. For sites in that situation, the mainframe running multiple Linux instances would, at the least, compare far better than his analysis suggests.

It's true! When we got rid of our old mainframe, we had all original equipment with the exception of the DASD (we actually still have the DASD from the old system sitting here, inactive) and a couple of memory upgrades; we were running on a mainframe that was 10 years old. VSE doesn't have the memory requirements that OS/390 does, and the performance is great! SOOO glad we never listened to the client/server or the MVS people... we'd have spent 2 or 3 times what we spent on our new mainframe, in less time too.

Hmm... we don't have a LOW-end system, but probably something in the middle. With maintenance, I could see us being above 7 digits, but for the actual cost of the machine, I doubt it was that much. More like around $200,000. We spend that much on gaggles of PCs every year.

Also, and I forgot to add this yesterday, much of the new mainframe hardware from IBM is modified PC technology. The SCSI tape drives are channel-attached, but they use a Pentium 166 MMX-based PC to convert the SCSI stuff to ESCON. Also, Unix is much different than any mainframe OS. You can't compare them; they operate and act so differently. I guess I won't convince everyone, but mainframes don't have to be expensive. And the size of mainframes is going down, not up, unlike the PC servers. Our new system, with the exception of the printer (pretty hard to make that beast smaller), fits in the space our OLD DASDs fit in, and we have more disk space to boot!

They are talking about bringing in an Oracle-based software package to replace some of the stuff on the mainframe; they are probably going to order AT LEAST 10 monster servers for that (and it will be more before we are completely done). I/O is THE most important function of a computer, period. Mainframes just pump data in and out. I/O is the reason your PC will choke sometimes when pumping video (don't tell me Linux is perfect here... I/you know better). I mean, come on! I LIKE PCs, but there's a huge bottleneck between the bus and CPU, since they have yet to invent a bus that can run at gigahertz speeds in a PC.

Absolutely incorrect. Some mainframe operating systems (at least VM and VSE) have been handling the time change without an IPL for a number of years (VSE 2 and VM at least 5). Yes, certain subsystems that are sensitive to the time change (CICS and DB2) need to be quiesced, but an IPL is no longer mandatory.

This seems to me to be yet another "hey, look, we ported Linux to the N64" type deal. The value in Linux is that it's a good server for cheap hardware, etc. My company uses it on a lot of our miscellaneous servers, web servers, etc., but for a lot of stuff we still go with native software for whatever we're using. You could run Linux on an SGI... but why would you want to if you're ILM or someone like that? Sure, you could run Linux on a Sun box, but what's the point?

I maintain that Linux is the best all-around OS out there by far. But that doesn't necessarily mean that it's better for everything.

Really... how does this apply to any of us at all? I mean, who of us owns a satellite like Iridium, or owns a mainframe like this, let alone 40,000 clustered Linux boxes? And even if we did... what in the world would we do with that processing power? I mean, hell, the average person could get by with 133 MHz (gotta be able to play MP3s)...

If you can get 40,000 virtual Linux instances running on a single S/390, does that not offer some interesting possibilities?

Lots of people run sites using Apache and Linux, with Perl, PHP, databases and other stuff behind it. The software was built on a model that it runs on a single PC, and all visitors to the site share that PC. This is the simplest model, the load balancing stuff and the 'front end web server and back-end database server models' are neat (and essential) refinements. OK, we do have Beowulf clusters (not in my bedroom, though.....). In essence, though, the software was written to run on one solitary PC, and is constrained by this.

Notwithstanding the colossal price, does a machine that allows the creation of 40000+ individual virtual Linux boxes not open up other opportunities ? Plumb one of these into your web-site, and each connection to your server could have the full resources of a virtual Linux box all to themselves. Very exciting.

Right, all we need now is a number of Open Source developers to get an S/390 installed in their garages to do some work on...

It's cheaper. IBM charges you for software on the basis of the machine power the software runs on. This is calculated on a per-LPAR basis if you partition a machine. So if you split a machine in half and run OS/390 on just one half, you pay less.
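The effect of per-LPAR billing is easy to illustrate with made-up numbers; the machine size and $/MIPS rate below are purely hypothetical, only the billing model comes from the thread:

```shell
# Hypothetical illustration: software is billed on LPAR capacity, not the box.
awk 'BEGIN {
  box_mips = 200                    # whole machine capacity (made-up figure)
  rate     = 1000                   # $/MIPS/year for a licence (made-up)
  printf "whole box: $%d/yr\n",      box_mips * rate
  printf "half-box LPAR: $%d/yr\n", (box_mips / 2) * rate
}'
# -> whole box: $200000/yr
# -> half-box LPAR: $100000/yr
```

Whatever the real rate is, confining OS/390 to half the box halves the capacity the licence is metered against.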

It's probably faster, too: there's less overhead with Linux than with OS/390.

I agree that Websphere or Lotus under USS is the way to go if you have a small volume of data to handle.

I know. I did VM system programming for a couple of years. With VM you don't even need to run them in separate LPARs unless you need the dedicated hardware performance. They'll run V=V, V=F or V=R just fine if they're not being heavily used, i.e. testing, development, etc. Carving up a system into LPARs is basically partitioning your machine into 2 or more separate hardware chunks.

I've seen people refer to both OS/390 (MVS) and VM. These are 2 completely different systems. As different as night and day. I guess I should read the report myself a bit more carefully, however, I would assume running 40,000 copies would apply only to VM and not OS/390.

If anyone shows me an actual real company running Linux on a S/390, AS/400 or any other old school IBM bit of kit in a production environment in the next 5 years I will personally eat an AS/400 9406-720.

Challenge accepted. Post your email address. I'm sure there are plenty of people who would enjoy seeing pictures of you munching server hardware.

I have several ideas for how we can use this at our site. At the moment everyone is busy trying to migrate everything they can find OFF the big iron. These are the people who used to be conservative, but they are caught in some sort of love affair with NT and PCs. The stampede will stop in a year or two.

Let's see your math... I would like to know your S/390 $/MIPS assumptions for hardware, software and maintenance. I would also like to see how you compared S/390 MIPS vs Intel/Alpha/PowerPC. Come on, show us.

You are right, I don't want to count instructions. The reason I stated MIPS is two fold: 1) the pricing model for IBM high end mainframes (HW, SW, Maint.) is solely MIPS (aka MSU) based and 2) I am curious how his 'math' compares the raw processing capability between S390/Alpha/Intel/PPC microprocessor based systems. I want to see the real cost vs workload numbers. I think that workload (eg static HTTP webserving vs secure ecommerce) will be the primary factor in the comparison of performance and cost when deciding which HW platform to choose. Once again, show me the numbers!

I don't think you need to worry; the ghosts would be traveling so fast I think they would be in front of you. Besides, they would probably be going so fast it would be like watching a gerbil on crack drink a case of Jolt. They would go insane and attack each other and go through the walls and such. You wouldn't care if you died; it would be fun to watch these blurs fly across the screen. Of course, with that speed I think you could finish a board in about 2.3 seconds, give or take a tenth.

It's not an issue of trying to run 40k+ copies of Linux, it's about having a stable, fast, and large platform to run an OS.

Yes and no. The 40k copies is one of those unfortunate statistics that leaks out every now and then, but has no practical value in the real world. The fact that it can run 40,000 is irrelevant. However, the fact that it can run many copies is relevant. 40,000 wouldn't be practical. 100 concurrent copies, on the other hand, is both practical and useful.

Umm. 99.999% uptime means 1 part in 100,000 downtime. This is 0.86 seconds per DAY, or more reasonably, about 5 minutes per year. This is imaginable for a permanently manned installation. Once every year or so, you could tolerate the operator having to manually switch off a malfunctioning CPU that had refused to fail over gracefully, or remove a dodgy cable that was causing behaviour too unpredictable for the OS to figure out, or something of that order.
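The arithmetic in that figure is easy to sanity-check:

```shell
# "Five nines" allows 0.001% downtime, i.e. 1 part in 100,000.
awk 'BEGIN {
  down = 1 - 0.99999                          # allowed downtime fraction
  printf "%.2f s/day\n",    down * 86400      # seconds in a day
  printf "%.2f min/year\n", down * 525600     # minutes in a year
}'
# -> 0.86 s/day
# -> 5.26 min/year
```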

Please to not be speakink for me and please to be applying the following replacement expressions to your post:

s/any of us/me/g; s/we/I/; s/the average person/me/;

I don't know about you, but I read Slashdot, and I've had to work with mainframes and their care & feeding. (Not anymore, thankee goodness.) But I think you'd be surprised to find the number of IT/IS nerds around here who're happy to hear about something like this, and want to try it.

The nerds that Slashdot is news for are not just basement Linux hobbyists running MP3s on 133 MHz Intel machines.

That's the beauty of VM. When you log onto a VM system, your operating system is "booted up", using the same software programming techniques as if you were booting off the bare iron.

However, there are many facilities that VM provides that the native hardware does not, things like inter-virtual-machine communication and so on. Still, if you restrict yourself to only using programming facilities defined in "S/390 Principles of Operation" (the processor manual), then your operating system should, and generally does, run correctly on the bare iron.

As for whether you'd want to do that, in some cases you can actually obtain a performance improvement by using VM instead of running on the bare iron. For instance, if you are overcommitting memory, you will generally get better performance by giving a flat address space to your virtual machine and letting VM do demand paging than you would by letting the guest operating system do its own paging.

Plus, the 40000+ Linuxes in a box was a stunt -- done to show that it could be done. No one in their right mind would *want* to do this in a production environment. Four or five Linux images would be more realistic.

A production VM, a testing VM, and a developmental VM for each of your developers. Your developers can do whatever they want to their own image of Linux. Play with the kernel, crash it five times a day, reload it from a backup in minutes. As your developers develop stable code, they implement it on the testing VM, and the "beta testing" takes place there. Once they are confident, they put their work on the production Linux.

Note that since VM can easily share disk space, you don't have to necessarily FTP anything around. You could just dismount a filesystem from one VM with one command, detach the virtual disk drive from that machine and attach it to another virtual machine, and mount the file system on the other system, all in a few seconds.

You gain a lot by running VM, and many sites do so because the advantages gained easily overwhelm the slight performance hit, if any.

You don't need to use LPAR. In fact, we ran VM native on our 3090 with no LPAR, and ran MVS as a guest. We could have run VM and MVS in their own partitions, but LPAR requires you to dedicate resources, and running MVS under VM allowed all of the resources to be used by either operating system on a demand basis.

Actually, in your example of dedicating an entire box to Linux, there would be no reason to use LPAR. You could switch off LPAR and get a slight performance improvement.

Rumors are that LPAR was an internal strategy within IBM to try and eliminate the need for VM on customer machines, at a time when there was considerable infighting between the MVS group and VM groups. As in the MVS people wanted VM eliminated. LPAR mode is nowhere near as flexible as VM and isn't a substitute. About the most that can be said for LPAR is that it gives you an extremely limited subset of VM's capabilities without your having to have VM.

Something that hasn't been mentioned here is that VM has an interesting configuration option called "V=R", which stands for "virtual (storage) = real (storage)", that allows you to run a single virtual machine in real memory starting at address zero, eliminating VM's page tables for that virtual machine!

This can be a huge win if that single virtual machine happens to maintain its own page tables (like VM, MVS, or Linux.) Normally, VM itself has to maintain what are called "shadow tables" -- a set of real page tables that map the virtual page tables in your virtual machine to the real hardware memory pages. Every time your virtual machine changes its page tables, it issues a "purge translate lookaside buffer" instruction. This privileged instruction is intercepted by the CP nucleus, which has to then re-examine your virtual page tables and recreate the shadow tables for the real hardware.

Say you want to give your MVS virtual machine 256 megs of memory. There is a way to tell VM to load its kernel above the 256 meg line, and not use the memory below 256M. Now, when you start your specially-identified MVS guest, because that virtual machine is assigned contiguous memory starting at location zero, VM no longer has to maintain shadow tables for that virtual machine; it can allow the V=R guest to directly manipulate the page table registers when it runs (subject to hardware bounds-checking, of course, to make sure the guest stays in its assigned memory), and that virtual machine can run that much faster.

This is something that isn't even an issue with Linux, because Linux doesn't allow your processes to maintain page tables. There's no notion of virtualizing the paging hardware, and no need for something like a V=R guest.

I don't know if V=R mode works with Linux. I'd be interested to find out, and to compare V=R Linux virtual machine performance with native "bare iron" performance. I bet there isn't much difference.

Another thing to consider is that the S/360, S/370, and S/390 processors and operating systems have maintained object code compatibility for over 30 years. This drastically simplifies processor upgrades, which is something to consider.

Who said these cannot be ported? And they would not be a problem to port, even if they remain closed source for licensing reasons.

The problem with Linux/390 is elsewhere. In case you have not noticed, Linux:

1. Does not yet support SNA well, which means it will not integrate with lots of old apps for now.
2. Has no network adapter sharing across VMs in the current port, which means that a network adapter is blown for every VM, which is bad.

So it has some way to go yet: someone has to clean up an SNA stack, and someone has to get the virtual network support properly done. And these can prove to be problematic, unlike porting JCL control programs.

In my experience, raises are tied to successes more than they are tied to office politics. The guys at the top only understand one thing: money. It's up to the IT department to translate everything it does into that lingua franca correctly and to argue effectively for its initiatives in terms of the organization's bottom line. What usually happens is technical people go with what they know, rather than doing their homework to fairly analyze all options, no matter how unfamiliar. Thus biased, they attempt to justify major capital expenditure to migrate to their favorite platforms and languages. Sometimes they succeed and sometimes they fail. Their raises are tied to how well they did what they said they would do, not whether they chose the best approach.

Therefore, I cannot tell you whether or not you will get a raise, but I could make a guess...

Regarding obsolete hardware in the glass house, you seem to be describing an organization that has not upgraded a platform and is having problems "right-sizing" it out of existence. Some of those applications must be starved for migration resources for this to be true, n'est-ce pas? It's a pity you can't finish the job and start saving all that money the mainframe's costing you.

BTW, how big was your last raise? Or was this all hypothetical? If so, what was the point you were trying to make?

Actually, mainframes are a good development ground for new technology. Large businesses can afford to pay for expensive stuff. Things like copper processors and Fibre Channel are not uncommon on mainframes. The last time I checked, my PC wasn't even using dedicated I/O channels. When the technology becomes less expensive, it generally trickles down to lower-end processors. Simple economics.

>>VMWare can do exactly the same thing. As could SoftPC from Insignia.

Except I believe that VMWare & SoftPC need to accomplish a lot of this through software emulation due to processor limitations. Most of VM is done at the hardware level to avoid the emulation slowdown. And, of course, VM has been doing this for decades.

Actually, I'm a 28-year-old OS/390 Systems Programmer who doesn't like the Grateful Dead and really dislikes COBOL. I've never even had a beard. I will admit I have a few grey hairs (darned kid).

>>...themselves seem "sexy"...

My wife thinks I'm sexy. So there.

>>...hot new IPOs they think of Linux, NOT IBM.

I would hope the public doesn't think of IBM. After all, the I in IPO stands for INITIAL, and IBM has been a backbone of the stock market for decades. I am curious: when is Linux going to have an IPO? I must be too ignorant, because I wasn't even aware that Linux was a company.

>>...shave off those beards, come out from behind the glass wall and start interacting like normal human beings.

As I said, I don't have a beard and my data center doesn't have a glass wall. I guess I'm deprived. When I want to interact like a normal human being, I'll be sure to do it as an Anonymous Coward on a web site.

>>...pretending they are "Scottie" on the Starship Enterprise, and acting like nobody understands computers except them.

I don't pretend I'm Scotty. I've always been partial to Mr. Spock. I don't pretend that nobody else understands computers. I just pretend that nobody understands MY computers like me (and most people don't). A subtle difference, but a difference nevertheless.

>>...centralising...

Unless you work in a small peer-to-peer network, every server is a "central" place. That's the whole definition of "server".

>>..."technical experts"...

I prefer the term "technical specialist", because few people are true experts. Most are like doctors in that they are good in a specific area. Running a Linux server doesn't make you a technical expert, just a specialist.

>>...but no, they refuse to go away

As soon as all the work goes away, I can assure you that we won't be around. So far I've seen nothing but large growth in my regions.

>>...refused to create me an account...

If it's obsolete, why do you need an account? Just run the work from your Sun box.

>>What century is this ?

The same century as it's been for the last 99 years. The 20th century. The 21st doesn't start for about 9 months.

>>Forms ?

Those are those pieces of paper you actually write on and place in the mail, because workflow management hasn't been implemented in a reliable fashion at your place of work. This isn't a mainframe issue, but a process issue.

>>Please spare me!!!

Sounds like since you didn't fill out the correct forms, you spared yourself.

>>...Solaris as the OS of choice for high-volume OLAP and OLTP applications...

Most of the world's data still resides on mainframes. The vast majority of business-critical transactions still run inside CICS. When large amounts of money or resources are on the line, most large companies still rely on the mainframe, because it's too reliable to replace.

>>...the death knell for these anachronistic monsters...

The mainframe has been declared dead more times than the Amiga (and that's a lot). Most of this took place in the '80s and early '90s, when midrange systems and PCs were going to kill it. We're still here. Now, with the Internet, mainframes are growing again, because on the back end nobody cares what the OS is as long as you can get to it through a browser.

>>thank you

No, thank you for posting such a biased, well-thought-out, researched post.

The author is missing the point. The geek who ran 40k+ copies of Linux at one time was doing it just to be a geek. Period. I seriously doubt there is a person out there who would need to run that many copies of Linux on one box.

IBM builds some pretty sweet hardware when it comes to mainframes. The darn things are built from the ground up to handle multiple users and OSes smoothly and efficiently. The IBM 3090 we had at college was pretty darn responsive even when both processors were at 90% utilization.

It's not an issue of trying to run 40k+ copies of Linux, it's about having a stable, fast, and large platform to run an OS.

OS/390 is Unix-branded and gives a reasonably good Unix environment which is well integrated with the other S/390 OSes. For example, I can access a dataset created in ISPF, or set up a job to be restarted with appropriate JCL automatically by Zeke, just like any other mainframe job.

I used to use an old 3081 that was rated at about 1 MIPS. It was much slower at calculations than the most pathetic SPARCstation I could find... you could almost hear the bits flip over. It did, however, have a very impressive uptime (409,968,000 seconds at least, if not twice that), and it also could move lots of data between devices very quickly. As far as I know, the box never had a second of downtime since it was installed.

Of course, management loved ranking machines by MIPS or MIPS/sq ft. The new (at the time) Sun 690 (218 MIPS?) was the best box in the place. It of course did nothing but eat juice till I put Usenet on it.

Obviously you wouldn't try to run 40,000 copies of SETI@home or some such; you wouldn't get enough CPU time each.

You could however use the machine as a file server or a web server without paying any hardware or software costs. This could make a big difference to a company who has a mainframe sitting in the basement. The next time someone wants to get a windoze server for their workgroup, you install samba or apache on the mainframe and skip the $10000+ for hardware and licenses.

Or you just install it temporarily while the permanent boxes are being ordered (which can take 6-12 months for a large enterprise, not kidding).

You don't even need to install Linux, samba and apache already run on OS/390.

This is the real value proposition for large businesses, avoid M$ tax, server room wasted space and yet another box to maintain.

Mainframe prices have dropped a LOT in the last few years. Amdahl has posted prices for some of the low-end models at "http://eshopamdahl.com:80/eshop/millennium/configs.asp". You can pick up a 1-CPU, 94-MIPS machine for $240,000. Keep in mind that mainframe MIPS are not directly comparable to other architectures. The largest model posted is a 5-CPU machine rated at 371 MIPS. Amdahl is shipping a 16-CPU model rated over 2000 MIPS.
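From those list prices, the hardware-only cost per MIPS is straightforward to work out (remembering the caveat that mainframe MIPS aren't directly comparable to other architectures):

```shell
# Hardware-only $/MIPS from the quoted Amdahl entry-level price.
awk 'BEGIN {
  price = 240000                   # 1-CPU machine, list price
  mips  = 94                       # its rated capacity
  printf "$%.0f per MIPS\n", price / mips
}'
# -> $2553 per MIPS
```

Software licensing and maintenance, as noted elsewhere in the thread, would come on top of that figure.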

From what I understand, the Linux port doesn't run "natively" on the raw iron. It would have taken much longer to write if they'd written it that way -- VM hides a few features of the iron from the OS, the ability to do VMs quite simply being one of them. There's been some discussion of hacking the OS around to make it run as the main S/390 OS, but most agree that it wouldn't be worth the effort since almost no one would actually do that.

I have to ask, what's wrong with running 40K+ separate servers? I understand that Linux itself is multiuser, but why not both multiuser and multiserver? I ran across this site, www.rhyton.com [rhyton.com], recently and saw that they claim to offer this ability - that you can have your very own virtual server to do with as you please - not an account, but a whole (though virtual) server. If this is the same kind of thing, I think the IBM box has great potential to be an ISP's dream come true. Am I wrong here? Do I have my facts wrong? Someone please tell me.

No, Linux runs native on "raw S/390 iron"... it also runs in an S/390 Logical Partition (LPAR) and also under VM. Three operational options... same exact Linux code. Check out http://www10.software.ibm.com/developerworks/opensource/linux390/ for more details.

1/40,000th of a mainframe might not be very much if each of those 40,000 machines is working flat out at 100% CPU all the time, but it'd be very practical for many purposes with a lower workload for each virtual machine.
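For a sense of scale, here's the naive worst-case arithmetic, assuming the 2000-MIPS 16-CPU figure quoted elsewhere in the thread (an assumption for illustration, not a spec for any particular box):

```python
# Naive even split of CPU capacity across 40,000 guests, all flat out at once.
total_mips = 2000   # assumed 16-CPU high-end rating from the thread
guests = 40_000
share = total_mips / guests
print(f"{share} MIPS per guest at full contention")
```

0.05 MIPS each if everyone runs flat out simultaneously - useless for compute, but as the next paragraph argues, most hosted workloads are idle almost all the time, so the real question is peak concurrency, not the even split.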

For instance, many small businesses have web sites with very small amounts of traffic. Currently these are often hosted on shared servers with other sites - often very many other sites, since the load for each site is very low. Out of 40,000 sites you would likely find that fewer than two hundred are actively being visited at any given time, and even then the server will be mainly I/O bound rather than CPU bound.

So the suggestion that a virtual machine on a mainframe could be used for each site, rather than just an HTTP/1.1 virtual server, is actually quite interesting, and certainly viable. It would solve some real-world problems too - security issues in particular.
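For comparison, the status-quo alternative - HTTP/1.1 name-based virtual hosting, where all the sites share one server process and one filesystem - looks like this in Apache (hostnames and paths made up for illustration):

```apache
# Two sites served by one Apache instance, distinguished by Host: header.
NameVirtualHost *

<VirtualHost *>
    ServerName www.example-one.com
    DocumentRoot /var/www/site-one
</VirtualHost>

<VirtualHost *>
    ServerName www.example-two.com
    DocumentRoot /var/www/site-two
</VirtualHost>
```

The VM-per-site approach replaces each `<VirtualHost>` stanza with an entire isolated Linux guest, which is exactly where the security win comes from: a compromise of one site can't read its neighbors' files.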

This is just one example - there are plenty of things you could do with 40,000 virtual machines on one box. The author of this piece either hasn't thought it through or is guilty of the same "parlor trick" that he accuses Scott Courtney of.

And yes, you could do 40,000 on one mainframe - indeed, you can do 40,000 on one Linux box if you have the right setup. I've worked with a server farm running a half-million user homepages off six Compaqs.

So you're calling us clueless because we're talking about mainframes? I mean that's great that you are involved with IBM and get to play with their newest stuff.

Now quit, and join a company who spent the big bucks on getting the big iron, decades ago, because they wanted something big, big, big. Now, convince them to toss those big expensive, contract maintained boxen out next to the dumpster.

Go ahead. Try it.

Now, convince them that you can repurpose that decaying beast that does less & less every year, into a modern powerhouse driven by the Latest and Greatest Buzzword Compliant [tm] Open Source [tm] software.

You try that. Then, tell me which of the above two solutions gets you a raise as an IT/IS professional.

Yes, indeed! The author doesn't seem to understand the value of I/O bandwidth and DASD sharing in loosely-coupled environments. Price/performance and Total Cost of Ownership (TCO) have a lot to do with why the "dinosaurs" are still very much with us.

I've noticed most of these S/390 discussions go round and round again on the same things. To me this indicates different people are speaking up each time and haven't read prior threads. I recommend you folks read those articles and discussions. Here's a link [slashdot.org] to a reply I made to an earlier discussion to help you get started.

You don't need S/390 hardware to try this stuff out. There's a wonderful S/390 emulator for x86 Linux called Hercules that you can run S/390 Linux under. So rather than throw bricks at what you don't understand, try getting your feet wet. Linux on big iron is going to be significant. Start getting ready now.

I'm sorry, but I'm going to have to disagree with the comment that running 40K copies of Linux on a mainframe is not useful. Think of ISPs or other companies that do co-location of servers. How about telcos that do it? The cooling systems, battery backup, backup generators, human maintainers, etc. cost a fortune to operate in a large data center. What if one was able to take an ACRE, yes, an ACRE of servers and replace them with one S/390? The cost of floor space over the course of a year would pay for the mainframe.

You may not need fewer people to maintain it, but you will certainly need fewer facilities. A data center I've worked in charges over $20 per square foot per month to host a server. Multiply that by a couple of thousand and that mainframe starts looking VERY attractive...
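The floor-space argument above is easy to check. The $20/sq ft/month rate is the poster's figure; 43,560 square feet per acre is the standard conversion:

```python
# Annual floor-space cost for an acre of data center at the quoted rate.
SQFT_PER_ACRE = 43_560
rate = 20                        # dollars per square foot per month (poster's figure)
monthly = SQFT_PER_ACRE * rate   # cost per month for the full acre
yearly = monthly * 12
print(f"${yearly:,} per year")
```

Over $10 million a year for the acre - which is indeed in the same ballpark as a high-end mainframe purchase, so the "floor space pays for the box" claim holds up arithmetically.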

Part of the vaunted S/390 uptime comes not only from the overt reliability of the hardware but also from the reliability of MVS itself. You don't necessarily get the same reliability from replacing MVS with Linux. Moreover, if you consider not only uptime but MTTR (mean time to repair), you will find that in the S/390 arena those figures come at the expense of years and years of practiced discipline: knowing just how to find and fix things quickly, and exploiting a well-understood break/fix methodology to remediate the problem. This is not necessarily the case with S/390 Linux now, and it would require some years of experience to get there.

It seems that S/390 Linux best serves service providers who can bill out chunks of a box to whomever wants a particular service hosted there. If this is the case, I'm left wondering why it makes any difference what OS runs under the covers of a hosted service. There are two cases that we have to consider.

One - somehow the service provider can offer a cheaper service because there are zero or near-zero OS licensing costs. If you look at maintenance, labor, hardware leasing, etc., is the OS license a significant enough factor to lower the overall cost model of a hosted service? True, MVS can be expensive, but we've already assumed that the expense would be allocated across many customers.

Two - are there applications that are not supported on MVS or AIX but are supported on Linux in this space, where a service provider would host a commercial service for a customer? Gee, I can't think of one in the realm of the SAPs, UDBs, Oracles, Lawsons, web servers, etc. Or alternatively, are there applications that can be procured from the vendor for significantly less if they are licensed for Linux and not anything else? Perhaps, but not likely.

In the end I can't see how this makes a great deal of business sense, however interesting it is to do technically. Having said that, there is one exception - the case of development plus migration. I can see a case being made for developing code on some other Linux platform, or in an instance in an LPAR, and then more or less easily being able to test and migrate it to production on a similar Linux platform in the LPARs. Today, for example, when we have to develop something in UDB/DB2 on, say, NT or Linux on a PC and then have to move that code upstream to an S/390, there is a whole basket of problems that you can't avoid. It is possible that S/390 Linux would reduce or even eliminate those, assuming the DB or application vendors themselves write more or less unified code for any Linux platform and the developer doesn't have to think about things like I/O performance, locking, security and whatnot.

One of my customers is always complaining about the large number of servers in their server room. They point to the 24 Intel, Alpha and RISC/6000 servers and yell about maintenance costs, training costs and even the cost of sheer space. They remember the good old days when they had only 2 servers. They're right, too. 24 servers do take a lot of space and cost a bunch to maintain and operate.

So, when I read about the S/390 version of Linux, I started, half serious (but half joking, too), to analyse replacing 24 low- to mid-range boxes with one S/390.

Aside from the cost of scrapping the 24 not-so-cheap machines and paying for a very expensive S/390, the maintenance costs are higher for a properly configured, working, redundant S/390 system. Much much higher. At least for now, it's just not a cost-effective proposition.

Some people, including the author of this article, have NO clue about the true costs involved in an S/390 system. The system itself can cost as much as a mid- to high-end Sparc box. The processor is at least as powerful, but that's not why you'd want to use it as a web server. Mainframes have KICK BUTT I/O; they are MUCH more efficient at pumping I/O. You could use one copy of Linux on a mainframe to stream audio or video for your web site, and another copy could do the web serving.

Also, the main cost of operating a mainframe is NOT just the hardware support, it's the software. We pay one vendor $20,000 a year just to get support and software updates! With Linux under VM, the only thing you have to pay for is VM. You can load as many copies of Linux as you want on the mainframe. Also, you could probably use existing bus-and-tag and ESCON devices such as printers, tape silos and DASD as native devices under Linux. There are mainframe printers that can print 90-plus pages per minute!

Granted, some of the mainframe reliability can be attributed to software, but on our current system we have 2 power feeds (you split those between two power substations), we have had RAID for YEARS longer than PC servers have, and we have been serving 2-3 million data requests a day - and this on a 10-year-old mainframe running DOS/VSE, a cousin of the first mainframe OS that MVS was to replace, but still going strong. Our new one hasn't even scratched the surface of the power we have available.

Oh, and lest I forget, they can call themselves in for service before they die. We have had disk packs go bad and we NEVER went down or even knew we had a pack go out. Also, ESCON, a fiber-based way to connect channel devices to the mainframe, can have a range of 4-5 MILES before needing a repeater (if one wants to lay that much fiber! It'd be cheaper to use the OSA ATM or Fast Ethernet adapter and do it with TCP/IP).

Um, lessee, scheduled downtime is limited to changing the time (we have to do it this way to preserve data integrity), some software updates do need an IPL too, OH, and in my opinion, the MF can boot faster too! Gork

From the perspective of a mainframe system programmer, I have to say that the 40,000 Linux VM machines never really seemed that useful. 40,000 systems is 40,000 things to maintain and configure, never mind whether they are on separate PCs or in virtual machines.

What that original article was really going gaga over was VM. I can understand that - VM is really sweet - but I doubt the configuration would be that useful.

However, I think Linux could be useful to mainframe sites like us. Here's why:

IBM runs something called Unix System Services under OS/390. This allows you to have a Unix filesystem on OS/390, TCP/IP, and all the open system stuff.

IBM has ported Apache (= WebSphere), Lotus Domino, and a bunch of other stuff to this environment.

And we are using them.

This is nice for us, but now the drain on our system is forcing us to partition it (well, we anticipated that).

So we are going to have a partitioned system with one slice basically running OS/390 to support Unix System Services, Websphere and Domino.

Very straightforward - except that the overhead of running OS/390 just to support Unix System Services is high.

Therefore I'd say that a probable - no, make that possible - future configuration for us is a partitioned S390 box with one slice running OS/390 and hosting the database and the other running Linux/390 and doing the web serving. Much lower overhead, I'd guess.

(Why not do the web serving from an RS/6000? Because our databases and so on are on the OS/390 system, and the S/390 will allow very fast data sharing, much faster than anything across a network.)

I'd love to install it and try it out.

BUT there is no way places like ours will make a commitment to Linux/390 without substantial IBM support.