In Unix-like operating systems, a device file or special file is an interface to a device driver that appears in a file system as if it were an ordinary file. Device files usually provide simple interfaces to standard devices (such as printers and serial ports), but can also be used to access specific resources on those devices, such as disk partitions. Additionally, device files are useful for accessing system resources that have no connection with any actual device, such as data sinks and random number generators.

There are two general kinds of device files in Unix-like operating systems, known as character special files and block special files. The difference between them lies in how much data is read and written at a time by the operating system and hardware. Together these can be called device special files, in contrast to named pipes, which are not connected to a device but are not ordinary files either.

MS-DOS borrowed the concept of special files from Unix but renamed them devices.[1] Because early versions of MS-DOS did not support a directory hierarchy, devices were distinguished from regular files by making their names reserved words, chosen for a degree of compatibility with CP/M.

In some Unix-like systems, most device files are managed as part of a virtual file system traditionally mounted at /dev, possibly associated with a controlling daemon. Such a daemon monitors hardware addition and removal at run time, makes corresponding changes to the device file system if the kernel does not do so automatically, and may invoke scripts in system or user space to handle special device needs. The FreeBSD and DragonFly BSD implementations name the virtual device file system devfs and the associated daemon devd. Linux primarily uses a user-space implementation known as udev, but there are many variants. Darwin, and operating systems such as macOS based on it, have a purely kernel-based device file system.

In Unix systems which support chroot process isolation, such as Solaris Containers, typically each chroot environment needs its own /dev; these mount points will be visible on the host OS at various nodes in the global file system tree. By restricting the device nodes populated into chroot instances of /dev, hardware isolation can be enforced by the chroot environment (a program cannot meddle with hardware it can neither see nor name; this is an even stronger form of access control than Unix file system permissions).

MS-DOS managed hardware device contention (see TSR) by making each device file exclusive open. An application attempting to access a device already in use would find itself unable to open the device file node. A variety of device driver semantics concerning concurrent access are implemented in Unix and Linux.[2]

(Figure caption: A simplified structure of the Linux kernel; file systems are implemented as part of the I/O subsystem.)

Device nodes correspond to resources that an operating system's kernel has already allocated. Unix identifies those resources by a major number and a minor number,[3] both stored as part of the structure of a node. These numbers are assigned differently on different operating systems and computer platforms. Generally, the major number identifies the device driver and the minor number identifies a particular device (possibly one of many) that the driver controls:[4] in this case, the system may pass the minor number to the driver. However, in the presence of dynamic number allocation, this may not be the case (e.g. on FreeBSD 5 and up).
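On Linux, a node's major and minor numbers can be read from the st_rdev field returned by stat(). A minimal sketch in Python, assuming a Linux system where /dev/null exists as a character device:

```python
import os
import stat

# Stat the node itself; st_rdev encodes the device's major:minor pair
# (st_dev, by contrast, identifies the filesystem the node lives on).
info = os.stat("/dev/null")

assert stat.S_ISCHR(info.st_mode)  # /dev/null is a character device

major = os.major(info.st_rdev)  # identifies the device driver
minor = os.minor(info.st_rdev)  # identifies the device the driver controls
print(f"/dev/null is char device {major}:{minor}")  # typically 1:3 on Linux
```

The same approach works for any node under /dev; for block devices, stat.S_ISBLK() is the corresponding type test.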

As with other special file types, the computer system accesses device nodes using standard system calls and treats them like regular computer files. Two standard types of device files exist; unfortunately their names are, for historical reasons, rather counter-intuitive, and explanations of the difference between the two are often incorrect as a result.

Character special files or character devices provide unbuffered, direct access to the hardware device. They do not necessarily allow programs to read or write single characters at a time; that is up to the device in question. The character device for a hard disk, for example, will normally require that all reads and writes are aligned to block boundaries and most certainly will not allow reading a single byte.

Character devices are sometimes known as raw devices to avoid the confusion surrounding the fact that a character device for a piece of block-based hardware will typically require programs to read and write aligned blocks.

Block special files or block devices provide buffered access to hardware devices, and provide some abstraction from their specifics.[5] Unlike character devices, block devices will always allow the programmer to read or write a block of any size (including single characters/bytes) and any alignment. The downside is that, because block devices are buffered, the programmer does not know how long it will take before written data is passed from the kernel's buffers to the actual device, or in what order two separate writes will arrive at the physical device. Additionally, if the same hardware exposes both character and block devices, there is a risk of data corruption, since clients using the character device are unaware of changes made in the buffers of the block device.
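Because writes through the buffered (block) path may linger in kernel buffers, a program that needs its data on the physical device must flush explicitly, typically with fsync(). A hedged Python sketch, using a regular temporary file for illustration since writing to a real block device requires privileges:

```python
import os
import tempfile

# Data written with os.write() lands in kernel buffers first; fsync()
# blocks until the kernel has pushed it to the underlying block device.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"important data\n")
    os.fsync(fd)  # do not return until the device has the data
finally:
    os.close(fd)
    os.unlink(path)
```

Without the fsync() call, a power failure could lose data that the write call had already reported as successful.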

Most systems create both block and character devices to represent hardware like hard disks. FreeBSD and Linux notably do not; the former has removed support for block devices,[6] while the latter creates only block devices. In Linux, to get a character device for a disk one must use the "raw" driver, though one can get the same effect as opening a character device by opening the block device with the Linux-specific O_DIRECT flag.
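The Linux-specific O_DIRECT flag mentioned above is exposed in Python as os.O_DIRECT. Actually opening a block device this way normally requires appropriate privileges and sector-aligned buffers, so the sketch below only shows the flag combination rather than performing real I/O; "/dev/sda" is a placeholder device name:

```python
import os

# O_DIRECT bypasses the kernel's buffer cache, approximating raw
# (character-device) semantics on a Linux block device. Reads and
# writes must then be aligned to the device's logical sector size.
flags = os.O_RDONLY | os.O_DIRECT

# Opening usually needs root; "/dev/sda" is a placeholder name:
# fd = os.open("/dev/sda", flags)
print(f"open flags: {flags:#x}")
```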

Device nodes on Unix-like systems do not necessarily have to correspond to physical devices. Nodes that lack this correspondence form the group of pseudo-devices; they provide various functions handled by the operating system. Some of the most commonly used (character-based) pseudo-devices include:

/dev/null – accepts and discards all input; produces no output (always returns an end-of-file indication on a read)
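This behaviour can be observed directly from any program; a small Python sketch:

```python
# /dev/null discards everything written to it and yields immediate
# end-of-file on every read.
with open("/dev/null", "wb") as sink:
    sink.write(b"this is discarded")

with open("/dev/null", "rb") as source:
    data = source.read()

print(data)  # b'' - end-of-file straight away
```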

Nodes are created by the mknod system call; the command-line program for creating nodes is also called mknod. Nodes can be moved or deleted by the usual filesystem system calls (rename, unlink) and commands (mv, rm). When passed the option -R or -a while copying a device node, the cp command creates a new device node with the same attributes as the original.
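The mknod() call itself is available from Python as os.mknod(). Creating real character or block nodes requires the CAP_MKNOD privilege, so the sketch below creates a FIFO node instead, which any user may do; the path is illustrative:

```python
import os
import stat
import tempfile

# mknod() creates a filesystem node of the requested type. Device nodes
# (S_IFCHR/S_IFBLK plus a device number built with os.makedev) need
# privilege, so this unprivileged example makes a FIFO node instead.
path = os.path.join(tempfile.mkdtemp(), "demo-fifo")
os.mknod(path, mode=stat.S_IFIFO | 0o600)

assert stat.S_ISFIFO(os.stat(path).st_mode)
os.unlink(path)
```

A privileged caller would instead pass, for example, mode=stat.S_IFCHR | 0o600 and device=os.makedev(1, 3) to recreate /dev/null.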

Some Unix versions include a script named makedev or MAKEDEV to create all necessary devices in the directory /dev. It only makes sense on systems whose devices are statically assigned major numbers (e.g. by means of hardcoding them in their kernel module).

The canonical list of the prefixes used in Linux can be found in the Linux Device List, the official registry of allocated device numbers and /dev directory nodes for the Linux operating system.[7]

For most devices, this prefix is followed by a number uniquely identifying the particular device. For hard drives, a letter is used to identify devices, followed by a number to identify partitions. Thus a file system may "know" an area on a disk as /dev/sda3, for example, or "see" a networked terminal session as associated with /dev/pts/14.
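The disk-naming convention above (driver prefix, a letter per device, a number per partition) is regular enough to parse mechanically. The helper below is hypothetical, not part of any standard library, and assumes modern Linux sd-style names:

```python
import re

def parse_sd_name(name):
    """Split a Linux SCSI-disk node name like 'sda3' into
    (device letters, partition number or None). Hypothetical helper."""
    m = re.fullmatch(r"sd([a-z]+)(\d+)?", name)
    if not m:
        raise ValueError(f"not an sd-style device name: {name}")
    letters, part = m.groups()
    return letters, int(part) if part else None

print(parse_sd_name("sda3"))  # ('a', 3): first disk, third partition
print(parse_sd_name("sdb"))   # ('b', None): whole second disk
```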

On disks using the typical PC master boot record, the primary and the optional extended partition are numbered 1 through 4, while any logical partitions are numbered 5 and onwards, regardless of the layout of the former partitions (their parent extended partition does not need to be the fourth partition on the disk, nor do all four primary partitions have to exist).

Device names are usually not portable between different Unix-like system variants; for example, on some BSD systems, IDE devices are named /dev/wd0, /dev/wd1, and so on.

devfs is a specific implementation of a device file system on Unix-like operating systems, used for presenting device files. The underlying mechanism of implementation may vary, depending on the OS.

Maintaining these special files on a physically implemented file system (i.e. a hard drive) is inconvenient, and since it needs kernel assistance anyway, the idea arose of a special-purpose logical file system that is not physically stored.

Also, defining when devices are ready to appear is not entirely trivial. The devfs approach is for the device driver to request creation and deletion of devfs entries related to the devices it enables and disables.

DeviceFS was started in 1991[9] and first appeared in RISC OS 3. It manages several device-like special files, most commonly Parallel, Serial, FastParallel, and USB; the SystemDevices module implements pseudo-devices such as Vdu, Kbd, Null and Printer.

As implemented in the kernel, character devices appear in the virtual \DEV directory and in any disk directory. Under MS-DOS/PC DOS 2.x, the CONFIG.SYS AVAILDEV=FALSE directive can be used to force devices to exist only in \DEV.

MS-DOS borrowed the concept of special files from Unix but renamed them devices.[1] Because early versions of MS-DOS did not support a directory hierarchy, devices were distinguished from regular files by making their names reserved words: certain file names were reserved for devices and should not be used to name new files or directories.[12] The reserved names themselves were chosen to be compatible with the "special files" handling of the PIP command in CP/M. There were two kinds of devices in MS-DOS: block devices (used for disk drives) and character devices (generally all other devices, including COM and PRN devices).[13] PIPE, MAILSLOT, and MUP are other standard Windows devices.[14]

A device file is a reserved keyword used in DOS, TOS, OS/2, and Microsoft Windows systems to allow access to certain ports and devices.

DOS uses device files for accessing printers and ports. Most versions of Windows also contain this support, which can cause confusion when trying to create files and folders with certain names, as they cannot have these names.[15] Versions 2.x of MS-DOS provide the AVAILDEV CONFIG.SYS parameter that, if set to FALSE, makes these special names active only if prefixed with \DEV\, thus allowing ordinary files to be created with these names.[16]

GEMDOS, the DOS-like part of Atari TOS, supported similar device names to DOS, but unlike DOS it required a trailing ":" character (on DOS, this is optional) to identify them as devices as opposed to normal filenames (thus "CON:" would work on both DOS and TOS, but "CON" would name an ordinary file on TOS but the console device on DOS). In MiNT and MagiC, a special UNIX-like unified filesystem view accessed via the "U:" drive letter also placed device files in "U:\DEV".

The 8-bit operating system of Sharp pocket computers like the PC-E500, PC-E500S etc. consists of a BASIC interpreter, a DOS 2-like File Control System (FCS) implementing a rudimentary 12-bit FAT-like filesystem, and a BIOS-like Input Output Control System (IOCS) implementing a number of standard character and block device drivers as well as special file devices including STDO:/SCRN: (display), STDI:/KYBD: (keyboard), COM: (serial I/O), STDL:/PRN: (printer), CAS: (cassette tape), E:/F:/G: (memory file), S1:/S2:/S3: (memory card), X:/Y: (floppy), SYSTM: (system), and NIL: (function).[22]

Corbet, Jonathan; Kroah-Hartman, Greg; Rubini, Alessandro (2005). Linux Device Drivers, 3rd Edition. O'Reilly. Retrieved 28 April 2017. "The next step beyond a single-open device is to let a single user open a device in multiple processes but allow only one user to have the device open at a time."

1.
Device file
–
In Unix-like operating systems, a device file or special file is an interface for a device driver that appears in a file system as if it were an ordinary file. There are also special files in MS-DOS, OS/2, and Microsoft Windows and they allow software to interact with a device driver using standard input/output system calls, which simplifies many tasks and unifies user-space I/O mechanisms. Device files often provide simple interfaces to peripheral devices such as printers and serial ports, finally, device files are useful for accessing system resources that have no connection with any actual device such as data sinks and random number generators. MS-DOS borrowed the concept of files from Unix but renamed them devices. Because early versions of MS-DOS did not support a directory hierarchy and this means that certain file names were reserved for devices, and should not be used to name new files or directories. The reserved names themselves were chosen to be compatible with special handling of PIP command in CP/M. There were two kinds of devices in MS-DOS, Block Devices and Character Devices, PIPE, MAILSLOT, and MUP are other standard Windows devices. There are two kinds of device files in Unix-like operating systems, known as character special files. The difference between them lies in how data written to them and read them is processed by the operating system. These together can be called device files in contrast to named pipes. Device nodes correspond to resources that an operating systems kernel has already allocated, Unix identifies those resources by a major number and a minor number, both stored as part of the structure of a node. The assignment of numbers occurs uniquely in different operating systems. Generally, the major number identifies the driver and the minor number identifies a particular device that the driver controls, in this case. However, in the presence of number allocation, this may not be the case. 
As with other file types, the computer system accesses device nodes using standard system calls. Character special files or character devices provide unbuffered, direct access to the hardware device and they do not necessarily allow programs to read or write single characters at a time, that is up to the device in question. The character device for a disk, for example, will normally require that all reads and writes are aligned to block boundaries. Block special files or block devices provide buffered access to hardware devices, unlike character devices, block devices will always allow the programmer to read or write a block of any size and any alignment

2.
Unix-like
–
A Unix-like operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell, there is no standard for defining the term, and some difference of opinion is possible as to the degree to which a given operating system or application is Unix-like. The Open Group owns the UNIX trademark and administers the Single UNIX Specification and they do not approve of the construction Unix-like, and consider it a misuse of their trademark. Other parties frequently treat Unix as a genericized trademark, in 2007, Wayne R. Gray sued to dispute the status of UNIX as a trademark, but lost his case, and lost again on appeal, with the court upholding the trademark and its ownership. Unix-like systems started to appear in the late 1970s and early 1980s, many proprietary versions, such as Idris, UNOS, Coherent, and UniFlex, aimed to provide businesses with the functionality available to academic users of UNIX. These largely displaced the proprietary clones, growing incompatibility among these systems led to the creation of interoperability standards, including POSIX and the Single UNIX Specification. Various free, low-cost, and unrestricted substitutes for UNIX emerged in the 1980s and 1990s, including 4. 4BSD, Linux, some of these have in turn been the basis for commercial Unix-like systems, such as BSD/OS and OS X. The various BSD variants are notable in that they are in fact descendants of UNIX, however, the BSD code base has evolved since then, replacing all of the AT&T code. Since the BSD variants are not certified as compliant with the Single UNIX Specification, dennis Ritchie, one of the original creators of Unix, expressed his opinion that Unix-like systems such as Linux are de facto Unix systems. Eric S. 
Raymond and Rob Landley have suggested there are three kinds of Unix-like systems, Genetic UNIX Those systems with a historical connection to the AT&T codebase. Most commercial UNIX systems fall into this category, so do the BSD systems, which are descendants of work done at the University of California, Berkeley in the late 1970s and early 1980s. Some of these systems have no original AT&T code but can trace their ancestry to AT&T designs. Trademark or branded UNIX These systems‍—‌largely commercial in nature‍—‌have been determined by the Open Group to meet the Single UNIX Specification and are allowed to carry the UNIX name, many ancient UNIX systems no longer meet this definition. Around 2001, Linux was given the opportunity to get a certification including free help from the POSIX chair Andrew Josey for the price of one dollar. Some non-Unix-like operating systems provide a Unix-like compatibility layer, with degrees of Unix-like functionality. IBM z/OSs UNIX System Services is sufficiently complete to be certified as trademark UNIX, cygwin and MSYS both provide a GNU environment on top of the Microsoft Windows user API, sufficient for most common open source software to be compiled and run. Subsystem for Unix-based Applications provides Unix-like functionality as a Windows NT subsystem, Windows Subsystem for Linux provides a Linux-compatible kernel interface developed by Microsoft and containing no Linux code, with Ubuntu user-mode binaries running on top of it

3.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require a system to function. Operating systems are found on many devices that contain a computer – from cellular phones, the dominant desktop operating system is Microsoft Windows with a market share of around 83. 3%. MacOS by Apple Inc. is in place, and the varieties of Linux is in third position. Linux distributions are dominant in the server and supercomputing sectors, other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run one program at a time. Multi-tasking may be characterized in preemptive and co-operative types, in preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, e. g. Solaris, Linux, cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking, 32-bit versions of both Windows NT and Win9x, used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem, a distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing, distributed computations are carried out on more than one machine. When computers in a work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses, embedded operating systems are designed to be used in embedded computer systems. 
They are designed to operate on small machines like PDAs with less autonomy and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design, Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is a system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing

4.
Device driver
–
In computing, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer. A driver communicates with the device through the bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device, once the device sends data back to the driver, the driver may invoke routines in the original calling program. Drivers are hardware dependent and operating-system-specific and they usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface. The main purpose of device drivers is to provide abstraction by acting as a translator between a device and the applications or operating systems that use it. Programmers can write the higher-level application code independently of specific hardware the end-user is using. For example, an application for interacting with a serial port may simply have two functions for send data and receive data. At a lower level, a device driver implementing these functions would communicate to the serial port controller installed on a users computer. Writing a device driver requires an understanding of how the hardware. In contrast, most user-level software on modern operating systems can be stopped without greatly affecting the rest of the system, even drivers executing in user mode can crash a system if the device is erroneously programmed. These factors make it difficult and dangerous to diagnose problems. The task of writing drivers thus usually falls to software engineers or computer engineers who work for hardware-development companies and this is because they have better information than most outsiders about the design of their hardware. Moreover, it was considered in the hardware manufacturers interest to guarantee that their clients can use their hardware in an optimum way. 
Typically, the Logical Device Driver is written by the operating system vendor, but in recent years non-vendors have written numerous device drivers, mainly for use with free and open source operating systems. In such cases, it is important that the manufacturer provides information on how the device communicates. Although this information can instead be learned by reverse engineering, this is more difficult with hardware than it is with software. Microsoft has attempted to reduce system instability due to poorly written device drivers by creating a new framework for driver development, if such drivers malfunction, they do not cause system instability. Apple has a framework for developing drivers on Mac OS X called the I/O Kit

5.
File system
–
In computing, a file system or filesystem is used to control how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops, by separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, the structure and logic rules used to manage the groups of information and their names is called a file system. There are many different kinds of file systems, each one has different structure and logic, properties of speed, flexibility, security, size and more. Some file systems have been designed to be used for specific applications, for example, the ISO9660 file system is designed specifically for optical discs. File systems can be used on different types of storage devices that use different kinds of media. The most common device in use today is a hard disk drive. Other kinds of media that are used include flash memory, magnetic tapes, in some cases, such as with tmpfs, the computers main memory is used to create a temporary file system for short-term use. Some file systems are used on local storage devices, others provide file access via a network protocol. Some file systems are virtual, meaning that the files are computed on request or are merely a mapping into a different file system used as a backing store. The file system access to both the content of files and the metadata about those files. It is responsible for arranging storage space, reliability, efficiency, before the advent of computers the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning, by 1964 it was in general use. 
A file system consists of two or three layers, sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application and it provides the application program interface for file operations — OPEN, CLOSE, READ, etc. and passes the requested operation to the layer below it for processing. The logical file system manage open file table entries and per-process file descriptors and this layer provides file access, directory operations, security and protection. The second optional layer is the file system. This interface allows support for multiple concurrent instances of physical file systems, the third layer is the physical file system

6.
Computer file
–
A computer file is a computer resource for recording data discretely in a computer storage device. Just as words can be written to paper, so can information be written to a computer file, there are different types of computer files, designed for different purposes. A file may be designed to store a picture, a message, a video. Some types of files can store different several types of information at once, by using computer programs, a person can open, read, change, and close a computer file. Computer files may be reopened, modified, and copied a number of times. Typically, computer files are organised in a system, which keeps track of where the files are. The word file derives from the Latin filum, such a file now exists in a memory tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones - speeds intelligent solutions through mazes of mathematics, in 1952, file denoted, inter alia, information stored on punched cards. In early use, the hardware, rather than the contents stored on it, was denominated a file. For example, the IBM350 disk drives were denominated disk files, although the contemporary register file demonstrates the early concept of files, its use has greatly decreased. On most modern operating systems, files are organized into one-dimensional arrays of bytes, for example, the bytes of a plain text file are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to some basic information about itself. Some file systems can store arbitrary file-specific data outside of the file format, on other file systems this can be done via sidecar files or software-specific databases. All those methods, however, are susceptible to loss of metadata than are container. 
At any instant in time, a file might have a size, normally expressed as number of bytes, in most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a storage device. In such systems, software employed other methods to track the exact byte count, the general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero byte file, these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents

7.
MS-DOS
–
MS-DOS is a discontinued operating system for x86-based personal computers mostly developed by Microsoft. MS-DOS resulted from a request in 1981 by IBM for a system to use in its IBM PC range of personal computers. Microsoft quickly bought the rights to 86-DOS from Seattle Computer Products, IBM licensed and released it in August 1981 as PC DOS1.0 for use in their PCs. During its life, several competing products were released for the x86 platform and it was also the underlying basic operating system on which early versions of Windows ran as a GUI. It is a operating system, and consumes negligible installation space. MS-DOS was a form of 86-DOS – owned by Seattle Computer Products. This first version was shipped in August 1980, Microsoft, which needed an operating system for the IBM Personal Computer hired Tim Paterson in May 1981 and bought 86-DOS1.10 for $75,000 in July of the same year. Microsoft kept the number, but renamed it MS-DOS. They also licensed MS-DOS1. 10/1.14 to IBM, within a year Microsoft licensed MS-DOS to over 70 other companies. It was designed to be an OS that could run on any 8086-family computer, thus, there were many different versions of MS-DOS for different hardware, and there is a major distinction between an IBM-compatible machine and an MS-DOS machine. This design would have worked well for compatibility, if application programs had only used MS-DOS services to perform device I/O, Microsoft omitted multi-user support from MS-DOS because Microsofts Unix-based operating system, Xenix, was fully multi-user. After the breakup of the Bell System, however, AT&T Computer Systems started selling UNIX System V, believing that it could not compete with AT&T in the Unix market, Microsoft abandoned Xenix, and in 1987 transferred ownership of Xenix to the Santa Cruz Operation. 
On 25 March 2014, Microsoft made the code to SCP MS-DOS1.25, as an April Fools joke in 2015, Microsoft Mobile launched a Windows Phone application called MS-DOS Mobile which was presented as a new mobile operating system and worked similar to MS-DOS. Version 3.1 – Support for Microsoft Networks Version 3.2 – First version to support 3.5 inch,720 kB floppy drives and diskettes. Version 3.21 Version 3.22 – Version 3.25 Version 3.3 – First version to support 3.5 inch,1.44 MB floppy drives and diskettes, Version 3. 3a Version 3.31 – supports FAT16B and larger drives. MS-DOS4.0 and MS-DOS4.1 – A separate branch of development with additional multitasking features and it is unrelated to any later versions, including versions 4.00 and 4.01 listed below MS-DOS4. x – includes a graphical/mouse interface. It had many bugs and compatibility issues. Version 4.00 – First version to support a hard disk partition that is greater than 32 MiB. Version 4.01 – Microsoft rewritten Version 4.00 released under MS-DOS label, First version to introduce volume serial number when formatting hard disks and floppy disks

8.
OS/2
–
OS/2 is a series of computer operating systems, initially created by Microsoft and IBM, then later developed by IBM exclusively. The name stands for Operating System/2, because it was introduced as part of the generation change release as IBMs Personal System/2 line of second-generation personal computers. The first version of OS/2 was released in December 1987 and newer versions were released until December 2001, OS/2 was intended as a protected mode successor of PC DOS. Because of this heritage, OS/2 shares similarities with Unix, Xenix, IBM discontinued its support for OS/2 on 31 December 2006. Since then, it has been updated, maintained and marketed under the name eComStation, in 2015 it was announced that a new OEM distribution of OS/2 would be released that was to be called ArcaOS. The development of OS/2 began when IBM and Microsoft signed the Joint Development Agreement in August 1985 and it was code-named CP/DOS and it took two years for the first product to be delivered. OS/21.0 was announced in April 1987 and released in December, the original release is textmode-only, and a GUI was introduced with OS/21.1 about a year later. OS/2 features an API for controlling the display and handling keyboard. In addition, development tools include a subset of the video, a task-switcher named Program Selector is available through the Ctrl-Esc hotkey combination, allowing the user to select among multitasked text-mode sessions. Communications and database-oriented extensions were delivered in 1988, as part of OS/21.0 Extended Edition, SNA, X. 25/APPC/LU6.2, LAN Manager, Query Manager, SQL. The promised graphical user interface, Presentation Manager, was introduced with OS/21.1 in October,1988 and it had a similar user interface to Windows 2.1, which was released in May of that year. 
The Extended Edition of 1.1, sold only through IBM sales channels, introduced distributed database support to IBM database systems. In 1989, version 1.2 introduced installable file systems and, notably, the HPFS file system. HPFS provided a number of improvements over the older FAT file system, including long file names; in addition, extended attributes were also added to the FAT file system. The Extended Edition of 1.2 introduced TCP/IP and Ethernet support. OS/2- and Windows-related books of the late 1980s acknowledged the existence of both systems and promoted OS/2 as the system for the future. The collaboration between IBM and Microsoft unravelled in 1990, between the releases of Windows 3.0 and OS/2 1.3. During this time, Windows 3.0 became a tremendous success, selling millions of copies in its first year. Much of its success came from Windows 3.0 being bundled with most new computers; OS/2, on the other hand, was available only as an expensive stand-alone software package. In addition, OS/2 lacked device drivers for many devices, such as printers; Windows, on the other hand, supported a much larger variety of hardware.

9. Microsoft Windows
Microsoft Windows is a metafamily of graphical operating systems developed, marketed and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with the IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded and Windows Phone; defunct Windows families include Windows 9x. Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on its innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system; however, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows runs on the Xbox One game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. Windows now consists of three operating system subfamilies that are released at almost the same time and share the same kernel: Windows, the operating system for personal computers and tablets.
The latest version is Windows 10; the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server, the operating system for server computers: the latest version is Windows Server 2016. Unlike its client sibling, it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE, a lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers. The latest version is Windows PE 10.0.10586.0. Windows Embedded: initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed: Windows 9x, whose consumer market Microsoft now caters to with Windows NT, and Windows Mobile, the predecessor to Windows Phone, which was a mobile operating system.

10. System call
In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system on which it is executed. This may include hardware-related services and the creation and execution of new processes; system calls provide an essential interface between a process and the operating system. In most systems, system calls can only be made from userspace processes, while in some systems, OS/360 and successors for example, privileged system code also issues system calls. The architecture of most modern processors, with the exception of some embedded systems, involves a security model in which the operating system executes at the highest level of privilege and allows applications to request services via system calls. Generally, systems provide a library or API that sits between normal programs and the operating system. The library's wrapper functions expose an ordinary function calling convention for using the system call; in this way the library, which exists between the OS and the application, increases portability. The call to the library function itself does not cause a switch to kernel mode and is usually a normal subroutine call; the actual system call does transfer control to the kernel. For example, in Unix-like systems, fork and execve are C library functions that in turn execute instructions that invoke the fork and exec system calls. On exokernel-based systems, the library is especially important as an intermediary: on exokernels, libraries shield user applications from the very low-level kernel API and provide abstractions and resource management. IBM operating systems descended from OS/360 and DOS/360, including z/OS and z/VSE, implement system calls through assembly-language macros; this reflects their origin at a time when programming in assembly language was more common than high-level language usage. IBM system calls are therefore not directly executable by high-level language programs.
On Unix, Unix-like and other POSIX-compliant operating systems, popular system calls are open, read, write, close, wait, exec, fork and exit. Many modern operating systems have hundreds of system calls: for example, Linux and OpenBSD each have over 300 different calls, NetBSD has close to 500, FreeBSD has over 500, Windows 7 has close to 700, while Plan 9 has 51. The ability of one program to trace another's system calls is itself implemented with a system call. Implementing system calls requires a transfer of control from user space to kernel space. A typical way to implement this is to use a software interrupt or trap: interrupts transfer control to the operating system kernel, so software simply needs to set up some register with the system call number needed and execute the software interrupt. This is the only technique provided for many RISC processors. By contrast, the x86 instruction set contains the instructions SYSCALL/SYSRET; these are fast control-transfer instructions designed to quickly transfer control to the kernel for a system call without the overhead of an interrupt. Linux 2.5 began using this on the x86, where available; formerly it used the INT instruction. An older x86 mechanism is the call gate.
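The wrapper-function relationship described above can be sketched in Python, whose os module functions are thin wrappers around the corresponding Unix system calls (a minimal sketch assuming a Unix-like system with /bin/true and /bin/false):

```python
import os

def run_in_child(path, argv):
    """Fork a child process, exec a program in it, and return its exit
    status.  os.fork(), os.execv() and os.waitpid() wrap the fork,
    execve and waitpid system calls invoked on the process's behalf."""
    pid = os.fork()                  # system call: duplicate this process
    if pid == 0:                     # child: replace our image with `path`
        os.execv(path, argv)         # does not return on success
    _, status = os.waitpid(pid, 0)   # parent: wait for the child to exit
    return os.WEXITSTATUS(status)

print(run_in_child("/bin/true", ["true"]))   # prints 0 on a Unix-like system
```

The call to run_in_child is an ordinary subroutine call; only the instructions executed inside fork, execve and waitpid actually transfer control to the kernel.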

11. Disk partitioning
Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or other secondary storage, so that an operating system can manage information in each region separately. Partitioning is typically the first step of preparing a newly manufactured disk. The disk stores the information about the partitions' locations and sizes in an area known as the partition table, which the operating system reads before any other part of the disk. Each partition then appears in the operating system as a distinct logical disk that uses part of the actual disk. Partitioning a drive means dividing its total storage into different pieces; once a partition is created, it can be formatted so that it can be used on a computer. Creating more than one partition has several advantages:
- Separation of the operating system files from user files. This allows image backups to be made of only the operating system.
- Having a separate area for operating system virtual memory swapping/paging.
- Keeping frequently used programs and data near each other.
- Having cache and log files separate from other files. These can change size dynamically and rapidly, potentially making a file system full.
- Use of multi-boot setups, which allow users to have more than one operating system on a single computer.
- Protecting or isolating files, to make it easier to recover a corrupted file system or operating system installation. If one partition is corrupted, other file systems may not be affected.
- Raising overall computer performance on systems where smaller file systems are more efficient.
- Short stroking, which aims to minimize performance-eating head-repositioning delays by reducing the number of tracks used per HDD. The basic idea is to make one partition of approximately 20–25% of the total size of the drive. This partition is expected to occupy the outer tracks of the HDD; if you limit capacity with short stroking, the minimum throughput stays much closer to the maximum.
This technique, however, is not related to creating multiple partitions. For example, a 1 TB disk may have an access time of 12 ms at 200 IOPS with an average throughput of 100 MB/s; when it is partitioned down to 100 GB, access time may be decreased to 6 ms at 300 IOPS with a throughput of 200 MB/s. Partitioning for significantly less than the size available when disk space is not needed can also reduce the time for diagnostic tools such as checkdisk to run, or for full image backups to run. On the other hand, partitioning prevents disk optimizers from moving all frequently accessed files closer to each other on the disk; files can still be moved closer to each other only within each partition. This issue does not apply to solid-state drives, as access times on those are neither affected by nor dependent upon relative sector positions. Partitioning may also prevent use of the whole disk capacity, because it can break free capacity apart.
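A toy model can illustrate why short stroking reduces head-repositioning delays. Assuming requests land uniformly over the tracks in use (the model and numbers are illustrative assumptions, not measurements of any real drive), restricting use to a fraction of the tracks shrinks the average head travel roughly in proportion:

```python
import random

def avg_seek_distance(fraction, trials=100_000, seed=42):
    """Estimate the mean head travel between two random requests when
    only the first `fraction` of a drive's tracks is in use.  Track
    positions are modelled as uniform random numbers in [0, fraction)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = rng.random() * fraction
        b = rng.random() * fraction
        total += abs(a - b)          # distance the head must travel
    return total / trials

full = avg_seek_distance(1.0)    # entire drive in use
short = avg_seek_distance(0.25)  # short-stroked to ~25% of the tracks
print(full, short)               # mean travel shrinks roughly 4x
```

The estimate for the whole drive converges to 1/3 of full-stroke distance, the classic average-seek result for uniformly distributed requests; the short-stroked figure is about a quarter of that.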

12. Directory (computing)
In computing, a directory is a file system cataloging structure which contains references to other computer files, and possibly other directories. On many computers, directories are known as folders, or drawers, to provide some analogy to a workbench or the traditional office filing cabinet. Files are organized by storing related files in the same directory. In a hierarchical file system, a directory contained inside another directory is called a subdirectory. The terms parent and child are often used to describe the relationship between a subdirectory and the directory in which it is cataloged, the latter being the parent. The top-most directory in such a file system, which does not have a parent of its own, is called the root directory. In modern systems, a directory can contain a mix of files and subdirectories. A reference to a location in a directory system is called a path. In many operating systems, programs have an associated working directory in which they execute; typically, file names accessed by the program are assumed to reside within this directory if the file names are not specified with an explicit directory name. Some operating systems restrict a user's access to only their home directory or project directory. In early versions of Unix the root directory was the home directory of the root user, but modern Unix usually uses another directory, such as /root, for this purpose. In keeping with Unix philosophy, Unix systems treat directories as a type of file. Folders are often depicted with icons which visually resemble physical file folders; there is a difference between a directory, which is a file system concept, and the user interface metaphor that is used to represent it. Many operating systems also have the concept of smart folders, which reflect the results of a file system search or other operation; these folders do not represent a directory in the file hierarchy.
Many email clients allow the creation of folders to organize email; these folders have no corresponding representation in the file system structure. If one is referring to a container of documents, the term folder is more appropriate; the term directory refers to the way a structured list of document files is stored on the computer. Operating systems that support hierarchical file systems implement a form of caching to RAM of recent path lookups. In the Unix world, this is usually called the Directory Name Lookup Cache (DNLC). For local file systems, DNLC entries normally expire only under pressure from other, more recent entries. For network file systems a coherence mechanism is necessary to ensure that entries have not been invalidated by other clients.
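The working-directory behavior described above can be demonstrated with a short sketch (the directory and file names here are made up for the demo):

```python
import os

# Build a small parent/child hierarchy, then change the process's
# working directory into it.
os.makedirs(os.path.join("demo", "parent", "child"), exist_ok=True)
os.chdir(os.path.join("demo", "parent"))    # change the working directory
print(os.getcwd())                          # path now ends in demo/parent

# A file name given without an explicit directory is resolved
# relative to the working directory:
with open("note.txt", "w") as f:
    f.write("created inside demo/parent, not where the program started")

# "child" is a subdirectory of the current directory, its parent:
print(os.path.isdir("child"))               # prints True
```

Every relative path the program uses from this point on is interpreted against demo/parent until the working directory is changed again.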

13. CP/M
Initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory, later versions of CP/M added multi-user variations and were migrated to 16-bit processors. The combination of CP/M and S-100 bus computers, loosely patterned on the MITS Altair, was a computer platform widely used in business through the late 1970s and into the mid-1980s. CP/M increased the market size for both hardware and software by greatly reducing the amount of programming required to install an application on a new manufacturer's computer. An important driver of innovation was the advent of low-cost microcomputers running CP/M, as independent programmers and hackers bought them. CP/M was displaced by MS-DOS soon after the 1981 introduction of the IBM PC. Manufacturers of CP/M-compatible systems customized portions of the operating system for their own combination of installed memory, disk drives and console devices. CP/M would also run on systems based on the Zilog Z80 processor, since the Z80 was compatible with 8080 code. CP/M used the 7-bit ASCII set; the other 128 characters made possible by the 8-bit byte were not standardized. For example, one Kaypro used them for Greek characters, and WordStar used the 8th bit as an end-of-word marker. The BIOS and BDOS were memory-resident, while the CCP was memory-resident unless overwritten by an application. A number of transient commands for standard utilities were also provided; transient commands resided in files with the extension .COM on disk. The BIOS directly controlled hardware components other than the CPU and main memory; it contained functions such as character input and output and the reading and writing of disk sectors. The BDOS implemented the CP/M file system and some input/output abstractions on top of the BIOS. The CCP took user commands and either executed them directly or loaded and started an executable file of the given name.
Third-party applications for CP/M were also essentially transient commands. The BDOS, CCP and standard transient commands were the same in all installations of a particular revision of CP/M, but the BIOS portion was always adapted to the particular hardware. Adding memory to a computer, for example, meant that the CP/M system had to be reinstalled with an updated BIOS capable of addressing the additional memory; a utility was provided to patch the supplied BIOS, BDOS and CCP to allow them to be run from higher memory. Once installed, the system was stored in reserved areas at the beginning of any disk which would be used to boot the system. On start-up, the bootloader would load the system from the disk in drive A. By modern standards CP/M was primitive, owing to the constraints on program size. With version 1.0 there was no provision for detecting a changed disk; if a user changed disks without manually rereading the disk directory, the system would write on the new disk using the old disk's directory information, ruining the data stored on the disk.

14. Virtual file system
A virtual file system (VFS), or virtual filesystem switch, is an abstraction layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way; a VFS can, for example, be used to access local and network storage devices transparently. A VFS specifies an interface, or contract, between the kernel and a concrete file system; therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. One of the first virtual file system mechanisms on Unix-like systems was introduced by Sun Microsystems in SunOS 2.0 in 1985; it allowed Unix system calls to access local UFS file systems and remote NFS file systems transparently. For this reason, Unix vendors who licensed the NFS code from Sun often copied the design of Sun's VFS. The SunOS implementation was the basis of the VFS mechanism in System V Release 4. John Heidemann developed a stacking VFS under SunOS 4.0 for the experimental Ficus file system; this design provided for code reuse among file system types with differing but similar semantics. Heidemann adapted this work for use in 4.4BSD as a part of his thesis research. Other Unix virtual file systems include the File System Switch in System V Release 3, the Generic File System in Ultrix, and the VFS in Linux. In OS/2 and Microsoft Windows, the virtual file system mechanism is called the Installable File System. The Filesystem in Userspace (FUSE) mechanism allows userland code to plug into the virtual file system mechanism in Linux, NetBSD, FreeBSD and OpenSolaris. Sometimes "virtual file system" instead refers to a file or a group of files that acts as a container and provides the functionality of a concrete file system through software. Examples of such containers are SolFS, or the virtual file systems in emulators such as PCTask and WinUAE, Oracle's VirtualBox, and Microsoft's Virtual PC. The primary benefit of this type of system is that it is centralized.
A major drawback, however, is that performance is low compared to other virtual file systems; the low performance is due to the cost of shuffling virtual files when data is written to or deleted from the virtual file system. Direct examples of container-style virtual file systems include emulators such as PCTask and WinUAE. This makes it easy to treat an OS installation like any other piece of software: transferring it with removable media or over the network. The Amiga emulator PCTask emulated an Intel 8088-based machine clocked at 4.77 MHz. Users of PCTask could create a file of large size on the Amiga file system, and this file would be accessed from the emulator as if it were a real PC hard disk.
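The interface-contract idea described above can be sketched in miniature: client code is written against an abstract interface, and any concrete file system that fulfils the contract works unchanged. The class and method names here are illustrative, not a real kernel API:

```python
from abc import ABC, abstractmethod

class FileSystem(ABC):
    """The uniform interface ('contract') each concrete file system
    fulfils so clients can use any implementation the same way."""
    @abstractmethod
    def open(self, path): ...
    @abstractmethod
    def read(self, handle, size): ...

class MemFS(FileSystem):
    """A toy in-memory 'concrete file system'."""
    def __init__(self, files):
        self.files = files           # maps path -> bytes
    def open(self, path):
        return self.files[path]      # the 'handle' is just the bytes
    def read(self, handle, size):
        return handle[:size]

def cat(fs: FileSystem, path: str) -> bytes:
    # Client code sees only the abstract interface; adding a new
    # file system type means subclassing FileSystem, nothing more.
    return fs.read(fs.open(path), 1024)

fs = MemFS({"/motd": b"hello"})
print(cat(fs, "/motd"))              # prints b'hello'
```

A network- or disk-backed implementation could be dropped in behind the same two methods without touching cat.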

15. FreeBSD
FreeBSD is a free and open-source Unix-like operating system descended from Research Unix via the Berkeley Software Distribution (BSD). Although for legal reasons FreeBSD cannot use the Unix trademark, it is a descendant of BSD. FreeBSD has similarities with Linux, with two major differences in scope and licensing: FreeBSD maintains a complete operating system, and its source code is generally released under a permissive BSD license rather than a copyleft license. The FreeBSD project includes a security team overseeing all software shipped in the base distribution; a wide range of additional third-party applications may be installed using the pkgng package management system or the FreeBSD Ports, or by directly compiling source code. FreeBSD's roots go back to the University of California, Berkeley, where the university acquired a UNIX source license from AT&T. The BSD project was founded in 1976 by Bill Joy, but since BSD contained code from AT&T Unix, all recipients first had to get a license from AT&T in order to use BSD. In June 1989, Networking Release 1, or simply Net-1 – the first public version of BSD – was released. After releasing Net-1, Keith Bostic, a developer of BSD, suggested replacing all AT&T code with freely redistributable code under the original BSD license. Work on replacing AT&T code began and, after 18 months, was largely complete; however, six files containing AT&T code remained in the kernel. The BSD developers decided to release Networking Release 2 (Net-2) without those six files, and 386BSD was released via an anonymous FTP server. The first version of FreeBSD was released in November 1993. In the early days of the project's inception, a company named Walnut Creek CDROM, upon the suggestion of two FreeBSD developers, agreed to release the operating system on CD-ROM. By 1997, FreeBSD was Walnut Creek's most successful product; the company itself later renamed to The FreeBSD Mall and later iXsystems. Today, FreeBSD is used by many IT companies such as IBM, Nokia and Juniper Networks, and certain parts of Apple's Mac OS X operating system are based on FreeBSD.
The PlayStation 3 operating system also borrows certain components from FreeBSD, and Netflix, WhatsApp and FlightAware are examples of big, successful and heavily network-oriented companies running FreeBSD. 386BSD and FreeBSD were both derived from 1992's BSD release. In January 1992, BSDi started to release BSD/386, later called BSD/OS, an operating system similar to FreeBSD and based on 1992's BSD release. AT&T filed a lawsuit against BSDi, alleging distribution of AT&T source code in violation of license agreements. The lawsuit was settled out of court, and the exact terms were not all disclosed; the only one that became public was that BSDi would migrate their source base to the newer 4.4BSD-Lite sources. Although not involved in the litigation, it was suggested to FreeBSD that it should also move to 4.4BSD-Lite. FreeBSD 2.0, released in November 1994, was the first version of FreeBSD without any code from AT&T. Desktop: although FreeBSD does not install the X Window System by default, it is available in the FreeBSD Ports Collection, as are a number of desktop environments such as GNOME, KDE and Xfce. Embedded systems: although FreeBSD explicitly focuses on the IA-32 and x86-64 platforms, it also supports others such as ARM, PowerPC and MIPS to a lesser degree.

16. DragonFly BSD
DragonFly BSD is a free and open-source Unix-like operating system created as a fork of FreeBSD 4.8. Its founder, Matt Dillon, initially sought to correct the problems he saw within the FreeBSD project, but due to ongoing conflicts with other FreeBSD developers over the implementation of his ideas, he started DragonFly as a separate project. Despite this, the DragonFly BSD and FreeBSD projects still work together, contributing bug fixes, driver updates and other system improvements to each other. DragonFly was intended to be the logical continuation of the FreeBSD 4.x series. Many concepts planned for DragonFly were inspired by the AmigaOS operating system. The messaging subsystem being developed is similar to those found in microkernels such as Mach, though it is less complex by design. Additionally, the migration of select kernel code into userspace has the benefit of making the system more robust: a crashing userspace driver cannot bring down the whole kernel. System calls are being split into userland and kernel versions and encapsulated into messages; Linux and other Unix-like OS compatibility code is being migrated out similarly, as support for multiple instruction set architectures complicates symmetric multiprocessing support. DragonFly originally ran on the 32-bit x86 architecture; however, as of version 4.0, x86 is no longer supported. Since version 1.10, DragonFly supports 1:1 userland threading; inherited from FreeBSD, DragonFly also supports multi-threading. In DragonFly, each CPU has its own thread scheduler, and inter-processor thread scheduling is accomplished by sending asynchronous IPI messages. The LWKT (Light Weight Kernel Threads) subsystem is employed to partition work among multiple kernel threads. In order to run safely on multiprocessor machines, access to shared resources must be serialized so that threads or processes do not attempt to modify the same resource at the same time.
In order to prevent multiple threads from accessing or modifying a shared resource simultaneously, DragonFly employs critical sections and serializing tokens; while both Linux and FreeBSD 5 employ fine-grained mutex models to achieve higher performance on multiprocessor systems, DragonFly does not. Until recently, DragonFly also employed spls, but these were replaced with critical sections. Critical sections are used to protect against local interrupts, individually for each CPU, guaranteeing that a thread currently being executed will not be preempted. Serializing tokens are used to prevent concurrent accesses from other CPUs and may be held simultaneously by multiple threads; blocked or sleeping threads therefore do not prevent other threads from accessing the shared resource, unlike a thread that is holding a mutex. The serializing token code is evolving into something similar to the read-copy-update (RCU) feature now available in Linux. Unlike Linux's current RCU implementation, DragonFly's is being implemented such that only processors competing for the same token are affected, rather than all processors in the computer. DragonFly also switched to a multiprocessor-safe slab allocator, which requires neither mutexes nor blocking operations for memory assignment tasks; it was eventually ported into the standard C library in the userland. Since release 1.8, DragonFly has had a virtualization mechanism similar to User-mode Linux, allowing a user to run another kernel in the userland.
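The general serialization requirement stated above can be illustrated in userspace Python with a mutex (note that this fine-grained mutex style is exactly what DragonFly avoids in favor of per-CPU critical sections and serializing tokens; the sketch only demonstrates why serialization of a shared resource is needed at all):

```python
import threading

counter = 0                     # the shared resource
lock = threading.Lock()         # a mutex serializing access to it

def bump(n):
    global counter
    for _ in range(n):
        with lock:              # only one thread at a time enters here
            counter += 1        # the read-modify-write is now atomic

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # always 40000 when the lock is held
```

Without the lock, two threads can read the same old value of counter and each write back old+1, losing an update; the mutex forces the read-modify-write sequences to happen one after another.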

17. Daemon (computing)
In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the name of a daemon ends with the letter d, as a clarification that the process is, in fact, a daemon: for example, syslogd is the daemon that implements the system logging facility. In a Unix environment, the parent process of a daemon is often, but not always, the init process. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal; such procedures are often implemented in various convenience routines, such as daemon() in Unix. Systems often start daemons at boot time to respond to network requests or hardware activity; daemons such as cron may also perform defined tasks at scheduled times. The term was coined by the programmers of MIT's Project MAC. They took the name from Maxwell's demon, an imaginary being from a thought experiment that constantly works in the background. Maxwell's demon is consistent with Greek mythology's interpretation of a daemon as a supernatural being working in the background; however, BSD and some of its derivatives have adopted a Christian demon as their mascot rather than a Greek daemon. The word daemon is an older spelling of demon, and is pronounced /ˈdiːmən/ DEE-mən. In the context of software, the original pronunciation /ˈdiːmən/ has drifted to /ˈdeɪmən/ DAY-mən for some speakers. Alternative terms for daemon are service, started task, and ghost job. After the term was adopted for computer use, it was rationalized as a backronym for Disk And Execution MONitor. Daemons which connect to a network are examples of network services; more commonly, however, a daemon may be any background process. A daemon typically becomes a background task by forking and exiting; this is sometimes required for the process to become a session leader, and it also allows the parent process to continue its normal execution.
Other common daemonization steps include setting the root directory as the current working directory, so that the process does not keep in use any directory that may be on a mounted file system, and closing inherited open files; required files are opened later. In the Microsoft DOS environment, daemon-like programs were implemented as terminate-and-stay-resident (TSR) software. On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons: they run as processes, usually do not interact with the monitor, keyboard and mouse, and may be launched by the operating system at boot time. However, any Windows application can perform the role of a daemon, not just a service. On the classic Mac OS, optional features and services were provided by files loaded at startup time that patched the operating system; these were known as system extensions and control panels.
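The fork-and-exit procedure together with the housekeeping steps above (new session, chdir to /) is conventionally wrapped in a helper. A minimal sketch of the classic Unix double-fork pattern, with error handling, descriptor closing and signal setup omitted:

```python
import os

def daemonize():
    """Classic Unix daemonization sketch (simplified)."""
    if os.fork() > 0:     # first fork: the original parent exits, so the
        os._exit(0)       # child is adopted by init (or similar)
    os.setsid()           # new session: dissociate from controlling tty
    if os.fork() > 0:     # second fork: the session leader exits, so the
        os._exit(0)       # daemon can never reacquire a controlling tty
    os.chdir("/")         # don't keep any mounted file system in use
    os.umask(0)           # reset the file-mode creation mask
```

A long-running program would call daemonize() once at startup and then enter its service loop; anything printed afterwards should go to a log, since the terminal is gone.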

18. Udev
Udev is a device manager for the Linux kernel. As the successor of devfsd and hotplug, udev primarily manages device nodes in the /dev directory. At the same time, udev also handles all user space events raised when hardware devices are added to the system or removed from it. It is an operating system's kernel that is responsible for providing an abstract interface to the hardware for the rest of the software; being a monolithic kernel, the Linux kernel does exactly that, and device drivers are part of the Linux kernel. Hardware can be accessed through system calls or over device nodes. Running in user space serves security and stability purposes: device discovery, state changes, etc. are handled by the Linux kernel, but after loading a driver into memory, the action the kernel takes is to send out an event to a userspace daemon. It is the device manager, udevd, that catches all of these events; for this, udevd has a comprehensive set of configuration files. In case a new device is connected over USB, udevd is notified by the kernel; a daemon could then mount its file systems. In case a new Ethernet cable is plugged into the Ethernet NIC, udevd is notified by the kernel and itself notifies the NetworkManager daemon; the NetworkManager daemon could then start dhclient for that NIC, or configure it according to some manual configuration. The complexity of doing this directly forces application authors to re-implement hardware support logic. Some hardware devices also require privileged helper programs to prepare them for use; these must often be invoked in ways that can be awkward to express with the Unix permissions model. Application authors resort to using setuid binaries or running service daemons to provide their own access control and privilege separation, potentially introducing security holes each time. HAL was created to deal with this, and udev replaced HAL. The default udev setup provides persistent names for storage devices.
Any hard disk is recognized by its unique filesystem ID and the name of the disk. Udev executes entirely in user space, as opposed to devfs's kernel space. Udev, as a whole, is divided into three parts: the library libudev, which allows access to device information and was incorporated into the systemd 183 software bundle; the user space daemon udevd, which manages the virtual /dev; and the administrative command-line utility udevadm, for diagnostics. The system gets calls from the kernel via a netlink socket; earlier versions used hotplug, adding a link to themselves in /etc/hotplug.d/default for this purpose.
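As an illustration of the configuration files mentioned above, a udev rule can match a device by its attributes and create a persistent symlink under /dev. The file name and serial number below are hypothetical placeholders:

```
# /etc/udev/rules.d/99-example.rules  (hypothetical file name)
# When a block device with this (placeholder) serial number appears,
# create a persistent alias /dev/backup_disk pointing at its node.
SUBSYSTEM=="block", ENV{ID_SERIAL}=="EXAMPLE_SERIAL_123", SYMLINK+="backup_disk"
```

After reloading the rules (udevadm control --reload) or rebooting, plugging in the matching disk would create the symlink regardless of which /dev/sdX name the kernel happens to assign.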

19. Darwin (operating system)
Darwin is an open-source Unix operating system released by Apple Inc. in 2000. It is composed of code developed by Apple, as well as code derived from NeXTSTEP, BSD and Mach. Darwin forms the set of core components upon which macOS, iOS and watchOS are based. It is mostly POSIX-compatible, but has never, by itself, been certified as compatible with any version of POSIX; starting with Leopard, macOS has been certified as compatible with the Single UNIX Specification version 3. The heritage of Darwin began with NeXT's NeXTSTEP operating system, first released in 1989. After Apple bought NeXT in 1997, it announced it would base its next operating system on OPENSTEP. This was developed into Rhapsody in 1997, Mac OS X Server 1.0 in 1999, Mac OS X Public Beta in 2000, and Mac OS X 10.0 in 2001. Up to Darwin 8.0.1, Apple released a binary installer after each major Mac OS X release that allowed one to install Darwin on PowerPC computers; minor updates were released as packages that were installed separately. Darwin is now only available as source code, except for the ARM variant, which has not been released in any form separately from iOS; however, older versions of Darwin are still available in binary form. The kernel of Darwin is XNU, a hybrid kernel that combines a heavily modified version of the Mach 3.0 kernel with various elements of BSD. The hybrid kernel design leverages the flexibility of a microkernel and the performance of a monolithic kernel. An open-source port of the XNU kernel exists that supports Darwin on Intel and AMD x86 platforms not officially supported by Apple, and an open-source port of the XNU kernel also exists for ARM platforms. Older versions supported some or all of 32-bit PowerPC and 64-bit PowerPC. Darwin supports the POSIX API by way of its BSD lineage, and a large number of programs written for various other UNIX-like systems can be compiled on Darwin with no changes to the source code. Darwin does not include many of the defining elements of macOS, such as the Carbon and Cocoa APIs or the Quartz Compositor and Aqua user interface.
The following is a table of major Darwin releases with their dates of release; note that the corresponding macOS release may have been released on a different date (refer to the macOS pages for those dates). In the build numbering system of macOS, every version has a unique beginning build number: Mac OS X v10.0 had build numbers starting with 4, 10.1 had build numbers starting with 5, and so forth. The command uname -r in Terminal will show the Darwin version number, and the command uname -v will show the XNU build version string. Due to the free software nature of Darwin, there are many projects that aim to modify or enhance the operating system; OpenDarwin was an operating system based on the Darwin system.

20.
MacOS
–
Within the market of desktop, laptop, and home computers, and by web usage, it is the second most widely used desktop OS after Microsoft Windows. Launched in 2001 as Mac OS X, the series is the latest in the family of Macintosh operating systems. Mac OS X succeeded classic Mac OS, which was introduced in 1984 and whose final release was Mac OS 9 in 1999. An initial, early version of the system, Mac OS X Server 1.0, was released in 1999; the first desktop version, Mac OS X 10.0, followed in March 2001. In 2012, Apple rebranded Mac OS X to OS X. Releases were code-named after big cats from the original release up until OS X 10.8 Mountain Lion; beginning in 2013 with OS X 10.9 Mavericks, releases have been named after landmarks in California. In 2016, Apple rebranded OS X to macOS, adopting the nomenclature that it uses for its other operating systems, iOS, watchOS, and tvOS. The latest version of macOS is macOS 10.12 Sierra. macOS is based on technologies developed at NeXT between 1985 and 1997, when Apple acquired the company. The X in Mac OS X and OS X is pronounced "ten". macOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS, and watchOS. A heavily modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Apple also used to have a separate line of releases of Mac OS X designed for servers; beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. Releases of Mac OS X from 1999 to 2005 can run only on the PowerPC-based Macs from that time period. Mac OS X 10.5 Leopard was released as a Universal binary, meaning the installer disc supported both Intel and PowerPC processors. In 2009, Apple released Mac OS X 10.6 Snow Leopard; in 2011, Apple released Mac OS X 10.7 Lion, which no longer supported 32-bit Intel processors and also did not include Rosetta.
All versions of the system released since then run exclusively on 64-bit Intel CPUs. The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed and then launched in 1989; its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, then called OPENSTEP, to serve as the basis for its next operating system. Previous Macintosh operating systems were named using Arabic numerals, e.g. Mac OS 8 and Mac OS 9. The letter X in Mac OS X's name refers to the number 10, and it is therefore correctly pronounced "ten" /ˈtɛn/ in this context; however, a common mispronunciation is "X" /ˈɛks/. Consumer releases of Mac OS X included more backward compatibility: Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with praise for its sophisticated, glossy Aqua interface.

21.
Access control
–
In security, access control is the selective restriction of access to a place or other resource. The act of accessing may mean consuming, entering, or using; permission to access a resource is called authorization. Locks and login credentials are two mechanisms of access control. Geographical access control may be enforced by personnel or with a device such as a turnstile, and there may be fences to avoid circumvention of this access control. An alternative to access control in the strict sense is a system of checking authorized presence; see, e.g., ticket controller. A variant is exit control, e.g. of a shop or a country. The term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human or through mechanical means such as locks and keys. Within these environments, physical key management may also be employed as a means of managing and monitoring access to mechanically keyed areas or access to certain small assets. Physical access control is a matter of who, where, and when: an access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically, this was accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates, and they do not provide records of the key used on any specific door; when a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed. Electronic access control uses computers to solve the limitations of mechanical locks and keys; a wide range of credentials can be used to replace mechanical keys.
The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded; when access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and raise an alarm if the door is forced open or held open too long after being unlocked. When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel. The control panel compares the number to an access control list and grants or denies the presented request.
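The grant-or-deny decision described above amounts to a lookup in an access control list. A minimal sketch in Python follows; the door names and credential numbers are invented for the example.

```python
# A toy access control list: door name -> set of allowed credential numbers.
# Both the doors and the numbers here are hypothetical.
access_list = {
    "door-1": {1001, 1002},
    "door-2": {1002},
}

def check_access(door, credential):
    """Return True if the credential is on the door's access control list."""
    return credential in access_list.get(door, set())

print(check_access("door-1", 1001))  # True: credential is on the list
print(check_access("door-1", 9999))  # False: unknown credential is denied
```

A real control panel would additionally log every transaction and drive the door lock for a predetermined time, as described above.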

22.
Kernel (operating system)
–
The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system. It is the first program loaded on start-up, and it handles the rest of start-up as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. It also handles memory and peripherals like keyboards, monitors, and printers. The critical code of the kernel is usually loaded into a protected area of memory, which prevents it from being overwritten by applications or other, more minor parts of the operating system. The kernel performs its tasks, such as running processes and handling interrupts, in kernel space; in contrast, everything a user does is in user space: writing text in a text editor, running programs in a GUI, and so on. This separation prevents user data and kernel data from interfering with each other and causing instability. The kernel's interface is a low-level abstraction layer; when a process makes a request of the kernel, the request is called a system call. Kernel designs differ in how they manage these system calls and resources: a monolithic kernel runs all the operating system instructions in the same address space, while a microkernel runs most processes in user space, for modularity. The kernel takes responsibility for deciding at any time which of the running programs should be allocated to the processor or processors. Random-access memory is used to store both program instructions and data; typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available, and the kernel is responsible for deciding which memory each process can use. I/O devices include such peripherals as keyboards, mice, disk drives, printers, network adapters, and display devices.
The kernel allocates requests from applications to perform I/O to an appropriate device. Key aspects necessary in resource management are the definition of an execution domain and the protection mechanism used to mediate access to the resources within a domain. Kernels also usually provide methods for synchronization and communication between processes, called inter-process communication (IPC). Finally, a kernel must provide running programs with a method to make requests to access these facilities. The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation: virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. This allows every program to behave as if it is the only one running (apart from the kernel).
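The user-space/kernel-space boundary described above can be observed from an ordinary process. The following sketch is Unix-specific and assumes the C library can be loaded via ctypes; it shows that the C library's getpid wrapper and Python's own os.getpid reach the same kernel facility through the system-call interface.

```python
import ctypes
import os

# Load the C library of the current process. Functions like getpid are
# thin user-space wrappers that trap into the kernel via a system call.
libc = ctypes.CDLL(None, use_errno=True)

pid_via_libc = libc.getpid()  # system call reached through the C library
pid_via_os = os.getpid()      # Python's wrapper around the same call

print(pid_via_libc == pid_via_os)  # True: the kernel reports one PID
```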

23.
Node (computer science)
–
A node is a basic unit used in computer science. Nodes are devices or data points on a larger network; devices such as a personal computer, cell phone, or printer are nodes. When defining nodes on the Internet, a node is anything that has an IP address. Nodes are also individual parts of a larger data structure, such as linked lists and tree data structures. Nodes contain data and may also link to other nodes; links between nodes are often implemented by pointers. Nodes are often arranged into tree structures, where a node represents the information contained in a single structure. These nodes may contain a value or condition, or possibly serve as another independent data structure. In a tree, each node other than the topmost one has a single parent node; the highest point on the structure is called the root node, which does not have a parent node. The height of a node is determined by the longest path from that node to the furthest leaf node, while node depth is determined by the distance between that node and the root node; the root node is said to have a depth of zero. Data can be discovered along these network paths, and an IP address uses this kind of system of nodes to define its location in a network. Child: a child node is a node extending from another node; for example, a computer with Internet access could be considered a child node of a node representing the Internet. The inverse relationship is that of a parent node: if node C is a child of node A, then A is the parent node of C. Degree: the degree of a node is the number of children of the node. Depth: the depth of node A is the length of the path from A to the root node; the root node is said to have depth 0. Height: the height of node A is the length of the longest path through children to a leaf node. Internal node: a node with at least one child. Leaf node: a node with no children. Root node: a node distinguished from the rest of the tree nodes; usually, it is depicted as the highest node of the tree. Sibling nodes: nodes connected to the same parent node.
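The parent/child, degree, height, and depth terms above can be made concrete with a small sketch; the class and method names here are illustrative, not a standard API.

```python
class Node:
    """A tree node holding a value and links to its child nodes."""
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

    def degree(self):
        # Degree: the number of children of the node.
        return len(self.children)

    def height(self):
        # Height: length of the longest downward path to a leaf.
        if not self.children:  # a leaf node has height 0
            return 0
        return 1 + max(child.height() for child in self.children)

# A root (depth 0) with two children; child "a" has one leaf child of its own.
root = Node("root", [Node("a", [Node("leaf")]), Node("b")])
print(root.degree())  # 2
print(root.height())  # 2
```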

24.
Computing platform
–
A computing platform is, in the most general sense, wherever a piece of software is executed. It may be the hardware or the operating system, even a web browser or another application. The term computing platform can refer to different abstraction levels, including a hardware architecture, an operating system, or runtime libraries; in total, it can be said to be the stage on which programs can run. For example, an OS may be a platform that abstracts the underlying differences in hardware. Platforms may also include: hardware alone, in the case of small embedded systems (embedded systems can access hardware directly, without an OS; this is referred to as running on bare metal); a browser, in the case of web-based software (the browser itself runs on a platform, but this is not relevant to software running within the browser); an application, such as a spreadsheet or word processor, which hosts software written in a scripting language (this can be extended to writing fully fledged applications with the Microsoft Office suite as a platform); software frameworks that provide ready-made functionality; cloud computing and Platform as a Service (the social networking sites Twitter and Facebook are also considered development platforms); a virtual machine such as the Java virtual machine, where applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM; and a virtualized version of a complete system, including virtualized hardware, OS, and software. These allow, for instance, a typical Windows program to run on what is physically a Mac. Some architectures have multiple layers, with each layer acting as a platform to the one above it. In general, a component only has to be adapted to the layer immediately beneath it; however, the JVM, the layer beneath the application, does have to be built separately for each OS.

25.
Peripheral
–
A peripheral is an ancillary device used to put information into and get information out of the computer. Touchscreens are an example that combines different devices into a single hardware component that can be used both as an input and an output device. A peripheral device is defined as any auxiliary device, such as a computer mouse or keyboard, that connects to and works with the computer in some way. Other examples of peripherals are image scanners, tape drives, microphones, loudspeakers, and webcams. Common input peripherals include keyboards, computer mice, graphic tablets, touchscreens, barcode readers, image scanners, microphones, webcams, game controllers, light pens, and digital cameras. Common output peripherals include computer displays, printers, and projectors. See also: computer hardware, controller, display device, expansion card, punched card, input/output, punched tape, video game accessory.

26.
Unlink (Unix)
–
In Unix-like operating systems, unlink is a system call and a command-line utility to delete files. The program directly interfaces with the system call, which removes the file name; if that name was the last hard link to the file and no process has the file open, the file itself is deleted, as with rm. Unlink also appears in the PHP, Node.js, and Perl programming languages in the form of the unlink built-in function. Like the Unix utility, it is used to delete files.
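The hard-link behaviour described above can be demonstrated with Python's os.unlink; the directory and file names below are arbitrary examples.

```python
import os
import tempfile

d = tempfile.mkdtemp()  # scratch directory for the example
a = os.path.join(d, "a")
b = os.path.join(d, "b")

with open(a, "w") as f:
    f.write("data")
os.link(a, b)            # "a" and "b" now name the same underlying file

os.unlink(a)             # removes only the name "a" ...
with open(b) as f:       # ... the file survives via the remaining link "b"
    contents = f.read()

os.unlink(b)             # last link removed; the file itself is deleted
```

After the first unlink the data is still readable through b; after the second, no name refers to the file and it is gone.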

Parallel ATA (PATA), originally AT Attachment, is an interface standard for the connection of storage devices such as …

Example of a 1992 80386 PC motherboard with nothing built in other than memory, keyboard, processor, cache, real-time clock, and slots. Such basic motherboards could have been outfitted with either the ST-506 or ATA interface, but usually not both. A single two-drive ATA interface and a floppy interface were added to this system via the 16-bit ISA card.

An Oak Technology Mozart 16 16-bit ISA sound card, from when the CDROM drive interface had not yet been standardized. This card offers four separate interface connectors for IDE, Panasonic, Mitsumi, and Sony CDROM drives, but only one connector could be used since they all shared the same interface wiring.

A SoundBlaster 32 16-bit ISA sound card, from after connector standardization had occurred, with an IDE interface for the CDROM drive.

HDD with disks and motor hub removed, exposing copper-colored stator coils surrounding a bearing in the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable; the spindle bearing is in the center and the actuator is in the upper left.

Head stack with an actuator coil on the left and read/write heads on the right