James Hague: "But all the little bits of complexity, all those cases where indecision caused one option that probably wasn't even needed in the first place to be replaced by two options, all those bad choices that were never remedied for fear of someone somewhere having to change a line of code... They slowly accreted until it all got out of control, and we got comfortable with systems that were impossible to understand." Counterpoint by John Cook: "Some of the growth in complexity is understandable. It's a lot easier to maintain an orthogonal design when your software isn't being used. Software that gets used becomes less orthogonal and develops diagonal shortcuts." If there's ever been a system in dire need of a complete redesign, it's UNIX and its derivatives. A mess doesn't even begin to describe it (for those already frantically reaching for the comment button, note that this applies to other systems as well).

What I'm really saying is that Unix years ago isn't remotely similar to Linux today in terms of new user friendliness - bad choice to use the word "is", don'tcha think?

No, not really. Sometimes, to access a file in the same directory I'm in, I have to do './filename' (or is it /.filename? I can't remember). Some of the most important files in the system are in a directory named etc. Do you know what 'etc' means?

Why the f**k would you put a lot of critical files in a directory that means 'and other stuff' ?

The default text editor for crontab on the systems I have to use is still vi, which is one of the most user-UNfriendly pieces of shit ever written. Hard drives are named 'hda' in the file system. And I could go on and on.

I suppose many Unix gurus would argue that the pain of learning such an ass-backwards and incomprehensible system as Unix is a rite of passage for enjoying its power. And I also understand that a lot of its eccentricities can be understood if you ever learn what a developer was thinking back in 1970-ish when all of this was being put together. I'm just saying that in 2012, we should be able to do better than this.

OK, you've convinced me - you really have tried to avoid Unix all these years, haven't you? ;-)

You don't need a preceding ./ to access a local file; the filename does nicely. On ALL modern operating systems of significance, including Windows, "." refers to the current working directory (a most useful concept they borrowed from Unix). So if you have a file in the current directory named foo, you could access it as ./foo on Linux, .\foo on Windows, or simply foo on either one.
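A quick illustration in a POSIX shell (the file name `foo` is made up for the example):

```shell
# Work in a scratch directory so nothing existing is touched
cd "$(mktemp -d)"
echo "hello" > foo

cat foo        # bare name, resolved relative to the current directory
cat ./foo      # explicit "here" prefix - same file
cat "$PWD/foo" # fully qualified absolute path - still the same file
```

All three commands read the same file and print `hello`; the `./` prefix only becomes mandatory when *executing* a program, as discussed further down the thread.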

Windows tosses all of its files into C:\Windows, but then, it's a single user, local operating system. Unix and Linux are multi-user network operating systems - they segregate files into several different directories depending on purpose. /etc holds all configuration files not necessary for boot and basic system configuration. Since booting and configuring the system are unique concerns relative to normal operation, it's natural to think of all of the non-unique config data as "et cetera". But you're free to think otherwise.
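You can see this for yourself on practically any Linux system (exact contents will vary from machine to machine):

```shell
# /etc is just a directory of (mostly) plain-text configuration files
ls /etc | head -n 5     # sample a few of the entries
head -n 3 /etc/passwd   # the user account database - ordinary, editable text
```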

When you launch vim in a modern Linux desktop (via gvim, its graphical front end), you get an editing window. Those little icons at the top are buttons for the typical editing features that you would find in (say) Windows Notepad - from left to right on this computer, they are Open File, Save Current File, Save All Files, Print (separator), Undo, Redo (separator), Cut, Copy, Paste (separator), Find/Replace, Find Next, Find Previous (separator), Choose Session, Save Session, Run Script (separator), Make, Build Tags, Jump to Tags (separator), and Help. This doesn't strike me as more difficult - or indeed, much different - than Notepad (ignoring script and make support, of course - but surely you can just ignore those?).

Now, if you want to run vim within a text window, it's a bit more complicated - but edlin in DOS wasn't exactly a paragon of user friendliness, either! :-D

All of this is rather academic, though, since vim is not the default editor in a modern Gnome-based Linux system - gedit is.

But look, I really don't care if you want to hate Unix based on a comparison of whatever you use today compared to what you tried decades ago. Feel free. But doesn't that seem a little unfair? Just a thought.

No, not really. Sometimes, to access a file in the same directory I'm in, I have to do './filename' (or is it /.filename? I can't remember).

What do you mean by "access"? If you're going to read from or write to a file in the current working directory, no path needs to be specified. If you want to execute a file (a binary or a script) from the current working directory which is not part of $PATH (the search path for programs to execute), you need to prefix it with the ./ "here" path. This makes sure you don't run the "ls" binary some hacker left in your home directory by accident. :-)
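A sketch of that lookup rule (the script name here is made up):

```shell
# Scratch directory with a tiny executable script in it
cd "$(mktemp -d)"
printf '#!/bin/sh\necho it ran\n' > hello.sh
chmod +x hello.sh

hello.sh 2>/dev/null || echo 'not found: the shell only searched $PATH'
./hello.sh    # explicit path, no $PATH search - this one runs
```

Putting `.` in $PATH would make the bare name work too, but that invites exactly the planted-"ls"-binary trick mentioned above.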

The dot in front of a filename means it's a "hidden file". Files starting with . are not listed by ls by default; you need ls -a to see them. Put more generically, ".* is not part of *" - the shell's * glob does not match names that begin with a dot.
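Demonstrated in a scratch directory (file names invented for the example):

```shell
cd "$(mktemp -d)"
touch visible .hidden

ls       # lists only: visible
ls -a    # also lists the dot entries: . .. .hidden
```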

Some of the most important files in the system are in a directory named etc. Do you know what 'etc' means?

It seems you should read a little bit about UNIX history. In the older versions, /etc (with the meaning "et cetera") did not only contain files for system configuration, as it does today, but also "additional binaries" such as /etc/mount or /etc/fsck.

Today, some people interpret /etc as "editable text configuration", which is what it is commonly used for on UNIX and Linux: text files for configuring the system, its services and additional software.

The default text editor for crontab on the systems I have to use is still vi, which is one of the most user-UNfriendly pieces of shit ever written.

Again, you should read some history. Agreed, vi is not intended to be a "word processing typewriter" - it's a powerful editor. Again my statement about language applies: you have to know how to operate it in order to make full use of its power.

Some UNIX and Linux systems have a different default editor (even though they often ship vi in the base distribution). $EDITOR or $VISUAL can be used to configure which editor should be invoked by programs that open a file for editing (e.g. chsh).
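For instance (nano here is just an example choice, not a recommendation):

```shell
# Tell programs like crontab -e and chsh which editor to invoke
export VISUAL=nano   # checked first by most programs
export EDITOR=nano   # the traditional fallback
```

Put the two export lines in your shell's startup file (e.g. ~/.profile) to make the choice permanent.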

Hard drives are named 'hda' in the file system.

Yes, and what's the problem?

The problem is that you are wrong. On some Linux systems the first hard drive is named /dev/hda, yes, but other Linux systems use sda. BSD uses ad0, da0 or ada0 for the same disk, depending on how it is attached to the system (and which driver grants access to it). OpenBSD used /dev/wd0. On other UNIXes it's just /dev/hd0. On Solaris it's even "more complicated", like /dev/dsk/c0t0d0s0. Since disks usually carry labels, there is usually no need to deal with these "bare metal" device files at all. Just ignore what you don't need.
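That "just ignore it" advice works because filesystems can be referred to by label instead of device node; a hypothetical Linux /etc/fstab fragment (the labels are invented):

```
# /etc/fstab - device names like hda/sda never appear here
LABEL=rootfs   /      ext4  defaults  0 1
LABEL=home     /home  ext4  defaults  0 2
```

Rename the disk, move it to another controller, switch drivers - the mounts still resolve.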

I suppose many Unix gurus would argue that the pain of learning such an ass-backwards and incomprehensible system as Unix is a rite of passage for enjoying its power.

Not quite. Learning basic concepts independently of their actual implementation is what makes a UNIX guru that powerful. He can use any system even though they are all different. He has successfully learned how to deal with new situations, not being tied to one particular way of doing things. This flexibility is the result of learning. I know many people fear learning because it consumes their time, and they feel much more comfortable in their "just works" world. Until it stops working. Then they are helpless, stuck with a black box that just doesn't work. And of course, they cannot diagnose problems, create workarounds or create something new. They can only consume what others have left for them. A UNIX guru can always create the functionality he needs, and he can do it with the most limited tools. Because, you know, in the worst case, when nothing works, you'll be happy for all that "ass-backwards" stuff, because it brings your data back, your system up and your company back into production.

And I also understand that a lot of its eccentricities can be understood if you ever learn what a developer was thinking back in 1970-ish when all of this was being put together.

UNIX has always been about development. It started its life as a development system, not as a consumer platform. This heritage is still present in modern systems that come with development tools, compilers and debuggers - with means of looking inside the black box. Free UNIX-derived systems even come with the source code of all their parts. Much of today's modern technology that consumers take for granted is somehow related to these beginnings. Do you know why the Internet works today? Because there's a lot of "old-fashioned" UNIX stuff (systems, concepts and philosophy) that keeps it running. This is not the short-term thinking you typically see in (home) consumer products.

I'm just saying that in 2012, we should be able to do better than this.

What exactly do you mean? What are you missing in particular? If it's just that you don't like it - then don't use it. It is that simple.

UNIX and Linux development has come a long way. But also recognize that much older stuff, like mainframe technology or COBOL, is still alive and kicking, especially in governmental use (where money doesn't matter). Could they do better? Sure! But why risk breaking something that has proven to work? :-)