
Wow, this question would take so long to answer comprehensively... like several books. There are very few OSes (the embedded space aside, and Windows being the notable exception) that aren't a Unix.
– xenoterracide♦ Aug 19 '10 at 0:57

6 Answers

A UNIX system consists of several parts, or layers as I'd like to call them.

To start a system, a program called the boot loader lives in the first sector of a hard disk partition. It is started by the system, and in turn it locates the operating system kernel and loads it.

Layering

The kernel. This is the central program, which is started by the boot loader. It performs the basic hardware interaction for the system (disk, memory, video, sound) and offers a virtual environment in which it can start programs. The kernel also ships all the drivers, which deal with all the little differences between hardware devices. To the outside world (the higher layers), each class of devices appears to behave in exactly the same consistent way, which the programs, in turn, can build upon.

Background subsystems. These are just regular programs, which stay out of your way. They handle things like remote login, provide a central message bus, and perform actions based on hardware/network events, for example Bluetooth discovery and WiFi management. Any network services (file server, print server, web server) also live at this level. In UNIX systems, these are all just normal programs.

The command line tools. These are all little programs which can be started to do things like text editing, downloading files, or administrating the system. At this point, a UNIX system is fully usable for system administrators. In Windows, this layer doesn't really exist anymore.

The graphical user interface. These are also just programs; the only difference is that they draw windows on the screen instead of writing text. This makes the system easier to use for regular users.

Any service or event will make its way from the bottom layer all the way up to the top.

Libraries - the common platform

Programs do a lot of common things, like displaying a window, drawing on the screen, or downloading a file. These things are the same for multiple programs, hence that code is put in separate "library" files (.so files, meaning shared object). A library can be shared across all programs.

For every imaginable thing, there is a library. There is one for reading/writing PNG files. There is one for JPEG files, for reading XML, for encryption, for video playback, and so on.
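As a small illustration, here is a minimal Python sketch of using such a shared library at runtime. It assumes a glibc-based Linux system, where the standard math library ships as the shared object `libm.so.6`:

```python
import ctypes

# Load the shared object; every program on the system can share this
# one copy of the library. "libm.so.6" is the glibc math library and
# is an assumption about the platform.
libm = ctypes.CDLL("libm.so.6")

# Declare the C signature of cos(): double cos(double).
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # → 1.0
```

Higher-level libraries such as Qt and GTK are loaded the same way under the hood; normally the dynamic linker simply does it automatically at program start.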

On Linux, the common libraries for application developers are Qt and GTK. These libraries use lower-level libraries internally for their specific needs, while exposing their functionality in a nice, consistent, and concise way so application developers can create applications even faster.

Libraries provide the application platform on which programmers can build end user applications for an operating system. The more high quality libraries a system provides, the less code a programmer has to write to make a beautiful program.

Some libraries can be used across different operating systems (for instance, Qt is), while some are tied to one specific operating system. This restricts your program to running on that platform only.

Inter process communication

A third cornerstone of an operating system is the way programs can communicate with each other: the Inter Process Communication (IPC) mechanisms. These exist in several flavors, e.g. a piece of shared memory, or a small channel set up between two programs to exchange data. There is also a central message bus on which each program can post a message and receive a response. This is used for global communication, where it's unknown which program can respond.
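As a sketch of one of these flavors, the following Python snippet sets up a small channel (a pipe) between two processes on a Unix system; the message and buffer size are illustrative only:

```python
import os

r, w = os.pipe()   # the kernel creates the channel: a read end and a write end
pid = os.fork()    # clone this process into a second, communicating process

if pid == 0:
    # Child process: write a message into the channel and exit.
    os.close(r)
    os.write(w, b"hello from the child")
    os.close(w)
    os._exit(0)
else:
    # Parent process: read the message back out of the channel.
    os.close(w)
    msg = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)          # wait for the child to finish
    print(msg.decode())         # → hello from the child
```

Shared memory and message buses (e.g. D-Bus) follow the same idea with different trade-offs.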

From libraries to Operating Systems

With libraries, IPC and the kernel in place, programmers can build all kinds of applications: system services, user administration, configuration, office work, entertainment, etc. This forms the complete suite which novice users recognize as the "operating system".

In UNIX/Linux systems, all services are just programs. All system admin tools are just programs. They all do their job, and they can be chained together. I've summarized a lot of major programs at http://codingdomain.com/linux/sysadmin/

What distinguishes UNIX from Windows

UNIX is mainly a system of programs, files and restricted permissions. A lot of complexity is avoided, making it a powerful system that makes its job look easy.

In detail, these are principles which can be found across UNIX/Linux systems:

There are uniform ways to access information ("everything is just a file"). You can open a file, network socket, IPC channel, kernel parameters and block devices as a file. Hence the appearance of the virtual filesystems in /dev, /sys and /proc. The only API you ever need is open, read and close.
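For example, this Python sketch reads kernel state with exactly those calls (Linux-specific: it assumes the /proc virtual filesystem is mounted, as it is on essentially every Linux system):

```python
import os

# /proc/uptime is not a file on disk; the kernel generates its contents
# on the fly. It is still accessed with plain open/read/close.
fd = os.open("/proc/uptime", os.O_RDONLY)
data = os.read(fd, 256)
os.close(fd)

# The first field is the number of seconds since boot.
uptime_seconds = float(data.split()[0])
print(uptime_seconds)
```

The same three calls work unchanged on an ordinary file, a device node in /dev, or a tunable in /sys.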

The underlying system is transparent. Every program operates under the same rules. Unlike Windows, there is no artificial difference between a "console program", a "GUI program" or a "background service". They are all just programs that happen to do different things. They can also all be observed, analyzed and debugged in the same way.

Settings are readable, editable, and can be annotated with comments. They typically have an INI-style format, but may use a custom format for the needs of that application. Because they are just files, they can be copied to other systems, archived, or backed up with standard tools.

No large "do it all at once" applications. The mantra is "do one thing, and do it well". Command line tools can be chained together into powerful combinations. Separate services (e.g. SMTP, IMAP and POP, and login) are separate subprograms, avoiding complex intertwined code and security issues. Complex desktop environments delegate the hard work to individual programs.

fork(). New programs are started by an existing program cloning itself. The clone sets up everything (e.g. file handles), and optionally replaces itself with the new program code. This makes it really easy to apply the same security settings and restrictions to new programs, share memory, or set up an IPC mechanism. The cost of starting a process is also very low.
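A minimal Python sketch of this fork-then-replace pattern (Unix-only; it assumes /bin/echo exists, which is true on virtually all Linux systems):

```python
import os

pid = os.fork()                 # clone the current process
if pid == 0:
    # The clone inherits file handles, environment and restrictions
    # from its parent, then replaces itself with a new program image.
    os.execv("/bin/echo", ["echo", "hello from the new program"])
else:
    # The original process waits for its clone to finish.
    _, status = os.waitpid(pid, 0)
    print("child exited with", os.WEXITSTATUS(status))  # → child exited with 0
```

This is exactly what a shell does every time you run a command.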

The file system is one tree, in which other disk partitions and network shares can be mounted. There is, again, a universal way of accessing data. Common system locations (e.g. /usr) can easily be mounted as a network share.

The system is built for low user privileges. After login, every user (except root) is confined to their own resources, running applications, and files only. Network services reduce their privileges as soon as possible. There is a single clear way to get more privileges, or to ask someone to execute a privileged job on one's behalf. Every other call is limited by the restrictions and limitations of the program.

Every program stores its settings in a hidden file or folder in the user's home directory. No program ever attempts to write to a global settings file.

A preference for openly described communication mechanisms over secret mechanisms or specific 1-to-1 mechanisms. Other vendors and software developers are encouraged to follow the same specification, so things can easily be connected, swapped out, and yet stay loosely coupled.

Thank you all for the comments and votes! Great to know the answer is appreciated this well!
– vdboor Feb 28 '11 at 13:07

@faif, it is quite standard (even Microsoft operating systems have it), and beauty is in the eye of the beholder, I suppose. The point is that everything is a file, even special ones.
– psusi Aug 25 '11 at 13:27

In the spirit of the previous two book recommendations I would also recommend

The Linux Programming Interface by Michael Kerrisk

which, albeit targeting the topic of UNIX/Linux system programming, reveals tons of detailed information about how Linux and, more generally, UNIX systems work from the programmer's and user's perspective. It delves in great detail into most of the bullets mentioned in vdboor's answer, and presents enough detail in an understandable and readable manner to give you a feel for the fundamental UNIX concepts and their underpinnings.

There are some excellent answers here. However, one thing I think has been left out is how *nix differs from other operating systems, particularly Microsoft Windows.

The fundamental concept already covered above "do one thing, do it well" is so central to *nix operating systems that it can sometimes be overlooked. Yet it is this design philosophy that makes Linux so flexible and powerful.

For instance, the Graphical User Interface (GUI) of MS Windows is intertwined with the OS. It is virtually impossible to install an MS operating system without the GUI. In Linux, you can easily bring up a server or embedded system that has no graphical component at all. It can be entirely command line driven and still be a full featured server.

The modular design of Linux also allows a system administrator to bring down a service, upgrade it, and bring it back up without rebooting the operating system. In fact, about the only time you must reboot a Linux operating system is when the kernel itself is being modified or upgraded.

For example, you could install a new window manager (GNOME, KDE, whichever) on Linux, and a user currently logged in to the system might never be aware.

On Windows, often the simplest changes to the system require a reboot, although sometimes this is more of a safety issue than an actual technical requirement. I would submit that this is one of the basic flaws of the MS operating systems. On Linux, you could upgrade many of the driver modules and have little or no impact on the users. On Windows, you might be required to reboot the entire box if you simply install a new application.

This modular design also gives Linux extraordinary flexibility. Each Linux system can be tailored to the specific task you need to accomplish, with as little resource overhead as possible. With Windows you cannot turn off the GUI to run a simple HTTP server. There is a memory footprint that Windows assumes, which creates a barrier below which your hardware cannot go. This is a primary reason that Linux has become the OS of choice for many mobile and embedded applications.

I could go on and on, but I hope these examples help to explain why Linux has become so popular, and how it really differs from that other OS.

I would recommend reading Advanced Programming in the UNIX Environment, 2nd edition, to learn a lot about the Single UNIX Specification (SUS) API and POSIX, which will give you an idea of what makes Unix Unix and how the components work, and work together.

However, it's a very C-heavy book and more of a reference manual. If you have a problem with insomnia, just take it to bed with you. That aside, if you are a Unix C programmer, it's a must-have.

UNIX is a strong OS, built on a sound design that has proven successful for more than 40 years (that's almost an eternity in computer science). The central technology is based on the C language and a myriad of small programs: the UNIX commands. The basic philosophy has been summarized by McIlroy:

Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

More on the UNIX philosophy can be found in E. S. Raymond's "The Art of UNIX Programming".
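The "text streams" part of McIlroy's advice can be sketched in a few lines of Python: a filter that reads lines, transforms them, and writes lines, so it composes with any other line-oriented tool (the function name and sample input are illustrative only):

```python
import sys

def number_lines(stream):
    """Prefix each line with its line number, like a tiny `cat -n`."""
    for i, line in enumerate(stream, start=1):
        yield f"{i}\t{line}"

# Any iterable of lines works: a file, sys.stdin, or a plain list.
# Wired to stdin/stdout, this filter could sit anywhere in a shell pipeline.
sys.stdout.writelines(number_lines(["pipes\n", "are\n", "universal\n"]))
```

Because every such tool speaks the same interface (lines of text in, lines of text out), tools written decades apart can still be chained together.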

+1 for "The Art of UNIX Programming". However, while the API is defined around C, there is no technical problem with implementing the whole system in Haskell (with bits of assembly ;)) or something like that.
– Maciej Piechotka Aug 18 '10 at 14:40

The bits of assembly can be written in Haskell, too. Have a look at Potential.
– Novelocrat Aug 19 '10 at 2:11