As in other operating systems, the filesystem provides information storage and retrieval, and it serves as one of several forms of interprocess communication: the many small programs that traditionally form a Unix system can store information in files for other programs to read, although pipes complemented it in this role starting with the Third Edition. The filesystem also provides access to other resources through so-called device files, which are entry points to terminals, printers, and mice.

The rest of this article uses Unix as a generic name to refer to both the original Unix operating system and its many workalikes.

The filesystem appears as one rooted tree of directories.[1] Instead of addressing separate volumes such as disk partitions, removable media, and network shares as separate trees (as done in DOS and Windows: each drive has a drive letter that denotes the root of its file system tree), such volumes can be mounted on a directory, causing the volume's file system tree to appear as that directory in the larger tree.[1] The root of the entire tree is denoted /.
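Because a mounted volume simply appears as a directory within the larger tree, a program can detect a mount boundary by comparing device numbers: a directory lies on a different volume than its parent exactly when the two report different st_dev values from stat(). The following is a minimal C sketch of this technique (the helper name and the /usr example are illustrative only, not part of any standard):

    #include <stdio.h>
    #include <sys/stat.h>

    /* Returns 1 if dir is on a different device than parent,
     * 0 if not, and -1 on error. */
    int is_mount_point(const char *dir, const char *parent)
    {
        struct stat d, p;
        if (stat(dir, &d) != 0 || stat(parent, &p) != 0)
            return -1;
        return d.st_dev != p.st_dev;
    }

    int main(void)
    {
        printf("/usr mounted separately? %d\n",
               is_mount_point("/usr", "/"));
        return 0;
    }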

In the original Bell Labs Unix, a two-disk setup was customary, where the first disk contained startup programs, while the second contained users' files and programs. This second disk was mounted at the empty directory named usr on the first disk, causing the two disks to appear as one filesystem, with the second disk's contents viewable at /usr.

Unix directories do not contain files. Instead, they contain the names of files paired with references to so-called inodes, which in turn contain both the file and its metadata (owner, permissions, time of last access, etc., but no name). Multiple names in the file system may refer to the same file, a feature termed a hard link.[1] The mathematical traits of hard links make the file system a limited type of directed acyclic graph, although the directories still form a tree, as they typically may not be hard-linked. (As originally envisioned in 1969, the Unix file system would in fact be used as a general graph with hard links to directories providing navigation, instead of path names.[2])
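This design can be observed directly through the POSIX file API. In the minimal C sketch below (the file names are hypothetical), link() creates a second directory entry for an existing inode, after which stat() reports the same inode number and a link count of two under either name:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat a, b;

        FILE *f = fopen("original", "w");   /* create a file... */
        if (f == NULL)
            return 1;
        fputs("data\n", f);
        fclose(f);
        if (link("original", "alias") != 0) /* ...and a second name for it */
            return 1;

        stat("original", &a);
        stat("alias", &b);
        printf("same inode: %d, link count: %d\n",
               a.st_ino == b.st_ino, (int)a.st_nlink);

        unlink("alias");     /* removing one name leaves the file intact */
        unlink("original");  /* the data is freed with the last name */
        return 0;
    }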

BSD also added symbolic links (often termed "symlinks") to the range of file types, which are files that refer to other files, and complement hard links.[3] Symlinks were modeled after a similar feature in Multics,[4] and differ from hard links in that they may span filesystems and that their existence is independent of the target object. Other Unix systems may support added types of files.[5]
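The difference is visible in the API as well: a symbolic link is a small file in its own right that merely stores a path, so it can be created even when its target does not exist. In this minimal C sketch (the names are hypothetical), lstat() examines the link itself rather than following it, and readlink() retrieves the stored target path:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        char target[256];
        struct stat info;

        /* Create a dangling symlink; "missing" need not exist. */
        if (symlink("missing", "dangling") != 0)
            return 1;

        lstat("dangling", &info);  /* inspect the link, not its target */
        printf("is a symlink: %d\n", S_ISLNK(info.st_mode) != 0);

        ssize_t n = readlink("dangling", target, sizeof(target) - 1);
        if (n >= 0) {
            target[n] = '\0';
            printf("points to: %s\n", target);
        }

        unlink("dangling");
        return 0;
    }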

Certain conventions exist for locating some kinds of files, such as programs, system configuration files, and users' home directories. These have been documented in the hier(7) man page since Version 7 Unix;[6] subsequent versions, derivatives, and clones typically have a similar man page.[7][8][9][10][11]

Here is a generalized overview of common locations of files on a Unix operating system:

/

The slash / character alone denotes the root of the filesystem tree.

/bin

Stands for binaries and contains certain fundamental utilities, such as ls or cp, that are needed to mount /usr, when that is a separate filesystem, or to run in single-user (administrative) mode when /usr cannot be mounted. In System V Release 4, this is a symlink to /usr/bin.

/etc

Contains system-wide configuration files and system databases; the name stands for et cetera.[13] Originally also contained "dangerous maintenance utilities" such as init,[14] but these have typically been moved to /sbin or elsewhere.

/home

Contains user home directories on Linux and some other systems. In the original version of Unix, home directories were in /usr instead.[15] Some systems use or have used still other locations: macOS has home directories in /Users, older versions of BSD put them in /u, and FreeBSD has /usr/home.

/lib

Originally held essential libraries: C libraries, but not Fortran ones.[13] On modern systems, it contains the shared libraries needed by programs in /bin, and possibly loadable kernel modules or device drivers. Linux distributions may have variants /lib32 and /lib64 for multi-architecture support.

/media

Default mount point for removable devices, such as USB sticks, media players, etc.

/mnt

Stands for mount. Empty directory commonly used by system administrators as a temporary mount point.

/opt

Contains locally installed software. Originated in System V, which has a package manager that installs software to this directory (one subdirectory per package).[16]

/root

The home directory for the superuser root, that is, the system administrator. This account's home directory is usually on the initial filesystem, and hence not in /home (which may be a mount point for another filesystem), so that it remains usable when specific maintenance must be performed with other filesystems unavailable. Such a case could occur, for example, if a hard disk drive suffers physical failures and cannot be properly mounted.

/sbin

Stands for "system (or superuser) binaries" and contains fundamental utilities, such as init, usually needed to start, maintain and recover the system.

/srv

Server data (data for services provided by the system).

/sys

In some Linux distributions, contains the sysfs virtual filesystem, which exposes information about the hardware and the operating system. On BSD systems, this is commonly a symlink to the kernel sources in /usr/src/sys.

/tmp

A place for temporary files not expected to survive a reboot. Many systems clear this directory upon startup or use tmpfs to implement it.
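Programs following this convention typically avoid fixed file names in /tmp, since the directory is shared by all users; the standard mkstemp() interface creates a uniquely named file atomically. A brief C sketch (the name template is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* mkstemp() replaces the XXXXXX suffix with a unique value
         * and atomically creates and opens the file. */
        char path[] = "/tmp/exampleXXXXXX";
        int fd = mkstemp(path);
        if (fd == -1)
            return 1;
        printf("temporary file: %s\n", path);
        close(fd);
        unlink(path);  /* temporary files should not outlive the program */
        return 0;
    }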

The "user file system": originally the directory holding user home directories,[15] but already by the Third Edition of Research Unix, ca. 1973, reused to split the operating system's programs over two disks (one of them a 256K fixed-head drive) so that basic commands would either appear in /bin or /usr/bin.[17] It now holds executables, libraries, and shared resources that are not system critical, like the X Window System, KDE, Perl, etc. In older Unix systems, user home directories might still appear in /usr alongside directories containing programs, although by 1984 this depended on local customs.[13]

/usr/include

Stores the development headers used throughout the system. Header files are mostly used by the #include directive of the C language, which is historically how the directory's name was chosen.
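For instance, a directive such as the one below is resolved by searching the compiler's system header directories, of which /usr/include is the customary default on Unix systems:

    #include <stdio.h>   /* typically found at /usr/include/stdio.h */

    int main(void)
    {
        puts("hello");
        return 0;
    }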

/usr/lib

Stores the libraries and data files needed by programs stored within /usr or elsewhere.

/usr/libexec

Holds programs meant to be executed by other programs rather than by users directly. E.g., the Sendmail executable may be found in this directory.[18] Not present in the FHS until 2011;[19] Linux distributions have traditionally moved the contents of this directory into /usr/lib, where they also resided in 4.3BSD.

/usr/local

Resembles /usr in structure, but its subdirectories are used for additions not part of the operating system distribution, such as custom programs or files from a BSD Ports collection. Usually has subdirectories such as /usr/local/lib or /usr/local/bin.

/usr/share

Architecture-independent program data. On Linux and modern BSD derivatives, this directory has subdirectories such as man for man pages, which used to appear directly under /usr in older versions.

/var

Stands for variable. A place for files that may change often, especially in size: for example, e-mail sent to users on the system, or process-ID lock files.

/var/log

Contains system log files.

/var/mail

The place where incoming mail is stored. Users (other than root) can access only their own mail. Often, this directory is a symbolic link to /var/spool/mail.
