Posted
by
Soulskill
on Friday December 02, 2011 @01:45PM
from the not-quite-over-the-hill dept.

riverat1 writes "After AT&T dropped the Multics project in March of 1969, Ken Thompson and Dennis Ritchie of Bell Labs continued to work on the project, through a combination of discarded equipment and subterfuge, eventually writing the first programming manual for System I in November 1971. A paper published in 1974 in the Communications of the ACM on Unix brought a flurry of requests for copies. Since AT&T was restricted from selling products not directly related to telephones or telecommunications, they released it to anyone who asked for a nominal license fee. At conferences they displayed the policy on a slide saying, 'No advertising, no support, no bug fixes, payment in advance.' From that grew an ecosystem of users supporting users much like the Linux community. The rest is history."

Just before 2038, there will be tons of hype about "The End of the Epoch!", just like "Happy New Year 2000! Nothing works anymore!" Plenty of work for ornery, old C programmers like me, with lawns to get off of.

After 2038, when everything is still working despite dire predictions, we will have to wait a bit for the next opportunity, when the 64-bit epoch runs out...

64-bit Unix time will run out on December 4, precisely at 3:30:08 PM, 292,277,026,596 AD. It will be a Sunday.
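For the curious, a quick back-of-the-envelope check in C, assuming a POSIX-style time_t counting seconds since 1970-01-01 UTC (the exact calendar date depends on leap seconds and such, but the magnitudes are easy to verify):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* 32-bit signed time_t runs out early on 19 January 2038 (UTC). */
    time_t t32 = INT32_MAX;                      /* 2,147,483,647 seconds */
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t32));
    printf("32-bit time_t ends at %s\n", buf);

    /* 64-bit signed time_t lasts roughly 292 billion years past 1970. */
    double years = (double)INT64_MAX / (365.2425 * 24 * 60 * 60);
    printf("64-bit time_t lasts about %.0f years past 1970\n", years);
    return 0;
}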

By then I fully expect computers will already have migrated well into the gigabytes-per-machine-word range, or will no longer be using bits as we know them. Either that, or we'll have encountered the heat death of the universe, so it will be irrelevant.

Let's hope not; the heat death of the universe isn't supposed to be for another
99,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,986,250,000,000 years or so, give or take a few...hundred billion or so.

System I, I think. System II got the "versions", then they "jumped" to System III, although many people had already side-stepped to Berkeley. But indeed it started with System, not versions (though the "versions" made it popular :-)). I found the "programmer's workbench" particularly interesting: http://en.wikipedia.org/wiki/PWB/UNIX [wikipedia.org], which had all kinds of cool programming and text processing tools :-)

And DG/UX, Reliant UNIX, RISC/os, SINIX, UNICOS, Dynix, and about twenty other moderately successful '90s UNIX systems. If you look closely, it's only showing systems that are either still alive or ancestors of systems that are still alive.

And they also missed a line from SunOS 4 to SVR4; they showed it as a line from 4.3BSD instead. The SVR4 VM system and VFS layer came from SunOS 4.x, and the dynamic-linking mechanism, although changed a bit with the switch from a.out to ELF, also came from SunOS 4.x. Most of the BSDisms added to SVR4 also came from SunOS 4.x rather than directly from 4.3BSD.

I have several unopened sets of Irix 6.x buried somewhere in a box in my basement. Alas, I've nothing to run them on. As I recall, Irix came complete with lots of utilities, but the C compiler was crippled unless you paid extra, PPP was crippled unless you paid extra, etc. etc.

Congratulations, you are the blind man holding the trunk of the elephant. Those other blind men you are responding to are holding a leg or a tail or an ear.

Actually, my eyes work quite well, and can see that there are a pile of different UN*X systems out there, which are similar in a lot of ways (a lot of core APIs, for example) and different in a lot of ways (system administration quirks, for example). If you focus on the parts that are most different, they're all weird in their own ways, with Mac OS X not necessarily being any worse than others (hey, at least ifconfig works the way it's supposed to); if you focus on the parts that are most similar, they're all close enough to call them UN*X.

"Unix", as in "the registered trademark "Unix"", is separate from "Linux", meaning either the Linux kernel or the set (equivalence class?) of Linux distributions. To be legally eligible to be called a "Unix", an OS has to pass the Single UNIX Specification test suite; as far as I know, nobody's run any Linux distributions through that test suite.

However, Linux is most definitely a "Un*x", in that its API is a UNIX-derived API, even if it might not be able to check every single checkbox for the Single UNIX Specification.

I don't think there's an OS sold today whose name is just "Unix", so "[using] Unix" presumably means "using an operating system that has been certified as following the Single UNIX Specification"; which particular such OSes do you use every day?

Linux is better, because it has capabilities.

If by "capabilities" you mean stuff such as "POSIX capabilities" that means you don't just have "normal privilege" and "root privilege", Solaris (which is an operating system that has been certified as following the Single UNIX Specification) has them as well.

It still has the horribly limited and antique unix-style file/folder structures, though - three octal modes

As opposed to four hexadecimal modes?

Linux, and several other UN*Xes, also support access control lists, at least on some file systems, if that's what you're contrasting with "three octal modes". (Adding a "delete" privilege and a set of permission bits for what amounts to root, to get the four hexadecimal modes, isn't a huge change.)
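To make the contrast concrete, here's a minimal sketch of the traditional three-octal-digit model; where ACLs are supported they layer extra entries on top of these bits rather than replacing them (the file name is just a placeholder):

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "example.txt";        /* placeholder; must already exist */

    /* Owner rw-, group r--, others ---: octal 0640. */
    if (chmod(path, 0640) != 0) {
        perror("chmod");
        return 1;
    }

    struct stat st;
    if (stat(path, &st) == 0)
        printf("mode bits are now %03o\n", (unsigned)(st.st_mode & 0777));
    return 0;
}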

I used to write systems and apps code on 64-bit computers with graphical UIs, language and context-sensitive IDEs, no root superuser, and automatic versioning filesystems. But that was back in the 1980s, before the black ships came and the secret of hose gartering that never ravels was forgotten.

So what 64-bit computers were those? (Presumably that means "64-bit address space"; "does 64-bit arithmetic in one instruction" is a lot less interesting.) System/38 and AS/400 had larger address spaces, but I think they had 48-bit address spaces until the PowerPC update. (I'm not talking about the 128-bit pointers, I'm talking about the address space available to the instructions that are executed by the machine rather than to the instructions translated into machine instructions.)

I miss VMS vaxen sometimes.

(Those, of course, weren't the 64-bit computers to which you were referring. OpenVMS Alphas were, but those didn't come out until the 1990's, and OpenVMS Itaniums are, but they didn't come out until even later. In any case, you've been talking about OS features, so VAX is irrelevant.)

All were similar in concept, but had their own ways of doing things. As these branched away from a common path, most groups agreed on a common set of rules, known as POSIX.

Once you've learned how one Unix-like environment works, you can use them all. You will find that a Linux server, an Android phone, a TiVo DVR, and even an Apple desktop, all operate in very similar ways, although each has its quirks.

The outstanding rogue operating system now is Windows. They too have recognized that they are missing out by remaining completely non-compliant, and have begun incorporating various aspects of POSIX as add-on (SFU or SUA) and 3rd party (Cygwin) packages.

The chart you displayed should have divided the "Unix" name between major and minor groups: major being operating systems such as Linux, with the minor elements combined as "Other Unix" and "Other OS". By that standard, "Windows", having such a minor share, should only have been labeled "Other OS".

In November 1993, Cray, Inc. accounted for 40% of all systems in the graph, and the largest share of the "Unix" segment. It would have been a mixture of UNICOS, COS, and Solaris. "Unix" as a specific OS only accounted for 15%. Even those simply used "Unix" as the OS name provided for the list, as an indication of a Unix-like operating system, not that it was actually "Unix".

Now get back out there, and don't make me hit you with a wrench again.

I'm curious how much recognizably-AT&T-derived code is in the current commercial UNIXes; probably more than in Linux distributions, but it might not be as much more than people think. UNIX's legacy is more the APIs and command-line interface than the actual code, and Linux has that stuff.

Linux is Unix, even if it's not certified as such. If it walks like a duck, quacks like a duck, etc. People started using Linux in the first place because they wanted "a Unix" for personal use. Linux is just a clone of Unix. In the end, it's not really any more different from "Unix proper" than the various flavors of licensed Unix are from each other. I'd argue that most Linux systems are a good deal closer to, say, Solaris, than OS X is... an officially certified Unix.

No kidding. The whole "*nix" descriptor came about because there were operating systems that were actually licensed variants of Unix, and other systems that were Unix-like, but legally could not call themselves Unix. Unix vs. Unix-like was not a technical description, but rather a legal one. Since Linux supports pretty much all the major features found in actual Unix-based systems, for all intents and purposes it is a Unix variant, even if it is a rewrite.

The top500 is only an indicator of a specific use. And I suspect that Linux is used because of specific advantages like cost and flexibility. The ability to run Intel/AMD in
combination with GPUs is a lot cheaper than hardware from a single maker. Also the Linux kernel can be tweaked for HPC. It is harder with commercial Unix. For the most part Unix is still being used for things like big iron.

I remember the first time I saw Unix, in 1976. The first step in installing it was to compile the C compiler (supplied IIRC in PDP-11 assembler) and then compile the kernel, and then the shell and all the utilities. You had an option as to whether you wanted to put the man pages online since they took up a significant (in those days) amount of disk space. Make was not yet released by AT&T so this was all done either by typing at the command line or (once the shell was running) from shell scripts.

I remember the first time I saw Unix, in 1976. The first step in installing it was to compile the C compiler (supplied IIRC in PDP-11 assembler)

As I remember, and as the "SETTING UP UNIX - Sixth Edition" document says (see the start *roff document in this V6 documentation tarball [tuhs.org] - yes, I know, tarballs are an anachronism here:-)), V6 came in a binary distribution that you read from a 9-track tape onto a disk:

If you are set up to do it, it might be a good idea immediately to make a copy of the disk or tape to guard against disaster. The tape contains 12100 512-byte records followed by a single file mark; only the first 4000 512-byte blocks on the disk are significant.

The system as distributed corresponds to three fairly full RK packs. The first contains the binary version of all programs, and the source for the operating system itself; the second contains all remaining source programs; the third contains manuals intended to be printed using the formatting programs roff or nroff. The `binary' disk is enough to run the system, but you will almost certainly want to modify some source programs.

You didn't have to recompile anything, at least not if you had more than 64KB. (I had to do some hackery with the assembler to get it to run on a 64KB machine, as there wasn't enough memory to run the C compiler; I had to stub out the pipe code with an assembler-language replacement for pipe.c, and then recompile the kernel with a smaller buffer cache and the regular pipe code.) Most users probably either had to recompile the kernel or had good reasons to (different peripherals, more memory for the buffer cache, or less memory in my case, so I had to shrink it from 8 whole disk blocks to 6, etc.). And if you weren't in the US eastern time zone or didn't have daylight savings time, you had to change ctime.c, or whatever it was called, in the C library for your time zone, recompile the C library, and then rebuild all utilities with the new C library (no Olson code and database, no shared libraries, no environment variables so no TZ environment variable).
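For contrast, a quick sketch of how the same thing is handled today: the zone comes from the TZ environment variable and the tz database at run time, so nothing gets recompiled (the zone name is just an example):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Pick a zone at run time instead of baking it into the C library. */
    setenv("TZ", "America/Los_Angeles", 1);
    tzset();                               /* re-read TZ */

    time_t now = time(NULL);
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&now));
    printf("local time: %s\n", buf);
    return 0;
}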

Since AT&T was restricted from selling products not directly related to telephones or telecommunications, they released it to anyone who asked for a nominal license fee.

It's interesting how AT&T couldn't support it for this reason, because today, UNIX is at the heart of both iOS and Android, which run some of today's most popular telephones.

Also at the heart of OS X. One of the smartest moves by Apple and Jobs, replacing the hideous old Mac OS with something built on Mach and borrowing heavily from BSD. Apple made the painful leap and it paid off handsomely.

It's a kernel that consists of Mach plus BSD code plus IOKit, with the BSD code modified to let Mach handle platform-specific stuff, the lower levels of process/thread management, and paging, and to let IOKit handle talking to hardware. Except when doing stuff such as sending Mach messages, userland talks to the BSD code for system calls: process management, address-space management, file system operations, and network operations all involve system calls to the BSD layer, even if the system call in question may ultimately be handled by Mach underneath.

UNIX systems generally have a good, though not impeccable, record for software reliability. The typical period between software crashes (depending somewhat on how much tinkering with the system has been going on recently) is well over a fortnight of continuous operation.

(The term "fortnight" is not widely used in the U.S., so I'll clarify that a fortnight is two weeks.)

Cardinal sin of replying to myself, but this one is too good not to post (in the spirit of the apocryphal "640K should be enough for anyone"). From page 1962:

...most installations do not use groups at all (all users are in the same group), and even those that do would be happy to have more possible user IDs and fewer group-IDs. (Older versions of the system had only 256 of each; the current system has 65536, however, which should be enough.)

The whole talk is really excellent, and there's this theme in it that the really great things come from some unexpected places, by the compounding of seemingly unrelated character traits, work habits and organization dynamics.

At the end in the Q&A, Hamming gets into a short discussion with the host Alan Chynoweth about the origins of UNIX, evincing from Alan a favorite quote:

"UNIX was never a deliverable!"

expanded:

"Hamming: First let me respond to Alan Chynoweth about computing. I [was in charge of] computing in research and for 10 years I kept telling my management, ``Get that !&@#% machine out of research. We are being forced to run problems all the time. We can't do research because we're too busy operating and running the computing machines.'' Finally the message got through. They were going to move computing out of research to someplace else. I was persona non grata to say the least and I was surprised that people didn't kick my shins because everybody was having their toy taken away from them. I went in to Ed David's office and said, ``Look Ed, you've got to give your researchers a machine. If you give them a great big machine, we'll be back in the same trouble we were before, so busy keeping it going we can't think. Give them the smallest machine you can because they are very able people. They will learn how to do things on a small machine instead of mass computing.'' As far as I'm concerned, that's how UNIX arose. We gave them a moderately small machine and they decided to make it do great things. They had to come up with a system to do it on. It is called UNIX!

A. G. Chynoweth: I just have to pick up on that one. In our present environment, Dick, while we wrestle with some of the red tape attributed to, or required by, the regulators, there is one quote that one exasperated AVP came up with and I've used it over and over again. He growled that, ``UNIX was never a deliverable!''"

Makes me wonder whether or not we'd be using as many Windows machines had the government allowed AT&T to sell and market Unix.

No. Windows got ahead because it was designed primarily as a platform for running high-level applications, such as word processors and spreadsheets, by single users on microcomputers, rather than being designed as a multi-user, general-purpose platform for programmers and other users who could invest a little more time in learning their way around the operating system. Also, Windows was backwards compatible with an operating system (DOS) which ran on older computers that did not have the hardware resources to run something like Unix.

Windows? Windows didn't even exist back then. The competitor on the low end was CP/M. A few years later MS introduced their CP/M clone, DOS. Windows came about a decade later. No one used UNIX on personal computers because it was only lightweight by mainframe / minicomputer standards. Most personal computers didn't have protected memory, and multitasking was a completely pointless operating system feature on a system that barely had enough RAM for one program.

Ironically, Microsoft's first saleable OS was a flavor of UNIX called Xenix. But Xenix on 80286s was really lame compared to UNIX on a PDP-11 or VAX. UNIX wasn't really that efficient on a PC until the 80486s in the mid-1990s. That was fortunately about the same time Linus started his version. Microsoft sold Xenix to SCO after it developed MS-DOS. SCO patent-trolled it unsuccessfully for many years.

According to a friend of mine (who had a single-digit Unix license #), AT&T originally refused to release UNIX on the advice of their lawyers because the anti-trust agreement prevented them from getting into non-phone markets. The universities who wanted access to the, then fledgling, OS then sued them over a clause that prevented AT&T from suppressing technology. The universities won that battle.

So (after probably sticking their tongue out at the lawyers who originally nixed the release) they released UNIX... and were then sued by other computer companies for violating the "phones only" clause of the anti-trust agreement. AT&T also lost that battle.

So now it was law. They couldn't suppress the technology, but they couldn't market or support it because it wasn't directly phone-related. That's where they came up with the rather convoluted system where, for a nominal price ($1 for universities; more, $20K I think, for companies) and signing a non-disclosure agreement, anybody could get a mag tape with a working system and source code, a pat on the back, and a 'good luck'.

ALL support was done by users (who, pretty early on got better at it than any company would have been) -- but the non-disclosure agreement meant that you couldn't just post a file with the fixed code in it... so that's where diff(1) patches came into play -- they exposed the fix without exposing too much of the source code. In some cases where patches were extensive, the originator of the patch would simply announce it and require people to fax a copy of the first page of their license before being emailed the fix.

AT&T was also rather pedantic about protecting their trademark, which resulted in people often using the UN*X moniker rather than include the trademark footnote at the end of their postings.

Seems like this sort of story always brings out the low-numbered /.'ers. I remember one post in the last few years where each reply was by a lower UID until someone showed up with a number under 1000. (If I remember right, lol. Memory is not my strong suit now, and the older I get, the less I care about that. lol)
While this was all happening, I was changing vacuum tubes in military crypto boxes. lol Hell, I remember my dad testing our TV's vacuum tubes at the A&P grocery store.

The article is well written, but I am not sure they have checked their facts... here is a direct quote from the article:
"It even runs some supercomputers."
Now just head over to the TOP500 page (http://i.top500.org/stats) and sort by OS... I wouldn't call > 80% just 'some supercomputers'.

Well, if you read the Microsoft EULA, you'll notice that they don't promise bug fixes either. It just isn't advertised that way (although they definitely do supply advertising)... and sometimes the support just consists of "yes, I think that's unfortunate, too".

The reason Windows, Mac OS, and pretty much all consumer and small-business OSs became successes is that they were cheap. DOS and Windows, in particular, became dominant because of the OEM ecosystem. Support and bugfixes? Microsoft support has always been expensive, and bugfixes for the operating system didn't even become widely distributed until Windows vulnerabilities reached a level where Microsoft was essentially forced to come up with Windows Update to dole out its bugfixes in a much easier way. When I first started out administering Windows NT-based systems, bugfixes only came regularly with service packs, or if you installed them based on advice from Microsoft directly or via KB articles, or because some guy on randomtechforum.com told you "yeah, KB28342818122 will fix your problem." And earlier versions of Windows sure as hell didn't even have that level of support. Windows 95 and Windows 3.1 were what they were, and about the only way you would get updates was if they shipped with some piece of software that needed to update a DLL or other support file.

It had little or nothing to do with support. Until Linux came along and basically did away with the expensive licensing and support costs associated with most *nix operating systems, *nix vendors didn't even give a shit about the PC market, and regarded PCs as glorified terminals when and where they had to connect to *nix-based systems. Still, even on the old Xenix system I administered, there were updates available; the last one I remember installing, around 1992 or 1993, was a patch to fix hard-coded originator host names in UUCP bangpaths (and if that doesn't date me, nothing does).

Is Java an insane higher level language? What about Eclipse, which works well with a whole range of high AND low level languages?

There just aren't any programs available.

I find that most of my needs are met. In fact, a lot of the programs I use on Windows were ported from Linux. The only piece of software I pay for (a developer's merge tool) had its origins on Windows, but they sell a Linux port, presumably in recognition of the fact that so many professionals find Linux machines productive.

If you want to do C#, MonoDevelop is available, although it was distinctly inferior to its Windows progenitor, SharpDevelop, the last I looked. But that's also true of Mono itself, IMHO. Aristeer is written in C#, so in principle there's no reason it couldn't be run on Mono / Linux, unless it uses some of the features that Mono hasn't caught up with yet.

Photoshop can run in Wine, and there is a paintshop portable app which works perfectly in Wine or on Windows. It's not legal of course, although there is no reason you couldn't buy a legal version with a licence...

Set your sights a little lower. This is a monopoly product you are talking about. If you try to talk nonsense, all of us that are forced to use it by corporate overlords, or have to fix computers for relatives, are going to know that you're full of it.

I dual boot Ubuntu and Windows 7. Ubuntu boots so fast that if I am not paying attention it is up before I realize it, and login is just as responsive. Windows 7, on the other hand, takes about double the time to reach the login screen, and then there is a wait at least as long for the machine to become responsive enough to use.

Granted, this is on a machine with only 2 GB of RAM. But running the same applications on each OS presents a world of difference (Remote Support Client [supports both OSes], the full Lotus Notes 8.5 client, and so on).

Yes, the camel surely looks elegant in the desert. But then again, fish don't climb trees.

Just because something works well in one area doesn't mean that it will function well outside of that area. This is why there will always be "other methods" for operating systems.

Windows is such an incredibly fragile system: all eggs in one basket. While it made sense for mass sale of PCs with a single disk, by fiat it left the programs, work, operating system, registry, and swap space all on one disk. You can choose to save your work from various suites on other drives, but they are still fooling around with drives C:, D:, E:, etc. If I need to reinstall the OS I end up with such a massive corruption of drivers that I'm almost better off starting from scratch, but I'd lose all my installed programs, because Microsoft likes to keep them all in Program Files on the C: drive, where the OS resides. I can move my memory swap to another physical drive to relieve some I/O burden, but it's not well known how to do this. Having applications, operating system files, swap file, and work files all on one disk is such a horrible idea, particularly without even the benefit of partitions (to protect some files or installed applications during a reinstall).

I configured my first Linux box to have a tidy spot for the OS and its sources, not too much bigger than necessary (safety factor of 2), put the swap file on its own partition, and installed all applications on a separate physical drive, with workspace for each on separate partitions. Flexible. I can change my hard disk configuration with a minimum of fuss. Try that with Windows.

I have no doubt you can change your configuration, but you clearly spent too much time deciding how to layout your first Linux box.

I was a sysadmin on a few mainframes before Linux even existed, and picked up the wisdom of how those systems were configured and why. When I got my distro disks I spent about 30 minutes working out how I wanted it configured. Really wasn't something most newbs would understand, even with *ix builds/installs. Certainly not in any way close to the default, out-of-the-box setup for Windows on any retail or business-configured PC I've ever seen; I'd wager 99.9% of all Windows PCs to the present have massive, everything-on-one-disk installs.