CLI shells require the user to be familiar with commands and their calling syntax, and to understand concepts about the shell-specific scripting language (for example, Bash). They are also more easily operated via a refreshable braille display, and provide certain advantages to screen readers.

Graphical shells place a low burden on beginning computer users, and are characterized as being easy to use. Since they also come with certain disadvantages, most GUI-enabled operating systems also provide CLI shells.

Most operating system shells are not direct interfaces to the underlying kernel, even if a shell communicates with the user via peripheral devices attached to the computer directly. Shells are actually special applications that use the kernel API in the same way as other application programs. A shell manages the user–system interaction by prompting users for input, interpreting their input, and then handling the output from the underlying operating system (much like a read–eval–print loop, REPL).[3] Since the operating system shell is actually an application, it may easily be replaced with another similar application on most operating systems.
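The prompt, interpret, respond cycle described above can be sketched in a few lines of Bash. This is an illustrative toy, not a real shell: the mysh> prompt is an invented name, and eval stands in for the full parsing a real shell performs.

```shell
#!/usr/bin/env bash
# A toy read-eval-print loop: prompt for a line, interpret it,
# let the operating system do the work, then prompt again.
while IFS= read -r -p 'mysh> ' line; do
  case "$line" in
    exit) break ;;          # a "built-in" handled by the loop itself
    '')   continue ;;       # ignore empty input
    *)    eval "$line" ;;   # hand everything else to the OS via bash
  esac
done
```

Fed the input line echo hello followed by exit, the loop prints hello and terminates, which is the REPL-like behavior the comparison in the text describes.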

Most operating system shells fall into one of two categories – command-line and graphical. Command line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). Other possibilities, although not so common, include voice user interface and various implementations of a text-based user interface (TUI) that are not CLI. The relative merits of CLI- and GUI-based shells are often debated.

A command-line interface (CLI) is an operating system shell that uses alphanumeric characters typed on a keyboard to provide instructions and data to the operating system, interactively. For example, a teletypewriter can send codes representing keystrokes to a command interpreter program running on the computer; the command interpreter parses the sequence of keystrokes and responds with an error message if it cannot recognize the sequence of characters, or it may carry out some other program action such as loading an application program, listing files, logging in a user and many others. Operating systems such as UNIX have a large variety of shell programs with different commands, syntax and capabilities. Some operating systems had only a single style of command interface; commodity operating systems such as MS-DOS came with a standard command interface but third-party interfaces were also often available, providing additional features or functions such as menuing or remote program execution.

Application programs may also implement a command-line interface. For example, in Unix-like systems, the telnet program has a number of commands for controlling a link to a remote computer system. Since the commands to the program are made of the same keystrokes as the data being sent to the remote computer, some means of distinguishing the two is required. An escape sequence can be defined, using a special local keystroke that is never passed on but is always interpreted by the local system. The program becomes modal, switching between interpreting commands from the keyboard and passing keystrokes on as data to be processed.

A feature of many command-line shells is the ability to save sequences of commands for re-use. A data file can contain sequences of commands which the CLI can be made to follow as if typed in by a user. Special features in the CLI may apply when it is carrying out these stored instructions. Such batch files (script files) can be used repeatedly to automate routine operations such as initializing a set of programs when a system is restarted. Batch mode use of shells usually involves control structures, conditionals, variables, and other elements of programming languages; some shells have the bare essentials needed for such a purpose, while others are very sophisticated programming languages in and of themselves. Conversely, some programming languages can be used interactively from an operating system shell or in a purpose-built program.
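A stored command sequence of this kind, using the variables, conditionals and loops just mentioned, might look like the following Bash sketch; the directory path and program names are invented for illustration:

```shell
#!/usr/bin/env bash
# A script automating a routine start-up task: ensure a log
# directory exists, then record each program as it is "initialized".
logdir="${1:-/tmp/startup-logs}"       # a variable, with a default value

if [ ! -d "$logdir" ]; then            # a conditional
  mkdir -p "$logdir"
fi

for prog in editor mailer browser; do  # a loop over a list of programs
  echo "initializing $prog" >> "$logdir/boot.log"
done
```

Run repeatedly (for example at every restart), the same file reproduces the same sequence of commands as if a user had typed them in.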

The command-line shell may offer features such as command-line completion, where the interpreter expands commands based on a few characters input by the user. A command-line interpreter may offer a history function, so that the user can recall earlier commands issued to the system and repeat them, possibly with some editing. Since all commands to the operating system had to be typed by the user, short command names and compact systems for representing program options were common. Short names were sometimes hard for a user to recall, and early systems lacked the storage resources to provide a detailed on-line user instruction guide.

Graphical shells provide means for manipulating programs based on graphical user interface (GUI), by allowing for operations such as opening, closing, moving and resizing windows, as well as switching focus between windows. Graphical shells may be included with desktop environments or come separately, even as a set of loosely coupled utilities.

Most graphical user interfaces develop the metaphor of an "electronic desktop", where data files are represented as if they were paper documents on a desk, and application programs similarly have graphical representations instead of being invoked by command names.

Graphical shells typically build on top of a windowing system. In the case of X Window System or Wayland, the shell consists of an X window manager or a Wayland compositor, respectively, as well as of one or multiple programs providing the functionality to start installed applications, to manage open windows and virtual desktops, and often to support a widget engine.

Modern versions of the Microsoft Windows operating system use the Windows shell as their shell. Windows Shell provides the familiar desktop environment, start menu, and task bar, as well as a graphical user interface for accessing the file management functions of the operating system. Older versions also include Program Manager, which was the shell for the 3.x series of Microsoft Windows, and which in fact shipped with later versions of Windows of both the 95 and NT types at least through Windows XP. The interfaces of Windows versions 1 and 2 were markedly different.

Desktop applications are also considered shells, as long as they use a third-party engine. Likewise, many individuals and developers dissatisfied with the interface of Windows Explorer have developed software that either alters the functioning and appearance of the shell or replaces it entirely. WindowBlinds by Stardock is a good example of the former sort of application; LiteStep and Emerge Desktop are good examples of the latter.

Interoperability programs and purpose-designed software let Windows users use equivalents of many of the various Unix-based GUIs discussed below, as well as that of the Macintosh. An equivalent of the OS/2 Presentation Manager for version 3.0 can run some OS/2 programs under some conditions, using the OS/2 environment subsystem in versions of Windows NT.

"Shell" is also used loosely to describe application software that is "built around" a particular component, such as web browsers and email clients, in analogy to the shells found in nature. These are also sometimes referred to as "wrappers".[2]

In expert systems, a shell is a piece of software that is an "empty" expert system without the knowledge base for any particular application.[7]

^ "The Internet's fifth man", Brain scan, The Economist, London: Economist Group, December 13, 2013: "Mr Pouzin created a program called RUNCOM that helped users automate tedious and repetitive commands. That program, which he described as a 'shell' around the computer's whirring innards, gave inspiration—and a name—to an entire class of software tools, called command-line shells, that still lurk below the surface of modern operating systems."

1.
Text-based user interface
–
Text-based user interface (TUI), also called textual user interface or terminal user interface, is a retronym coined sometime after the invention of graphical user interfaces. TUIs display output in text mode; an advanced TUI may, like a GUI, use the entire screen area and accept mouse and other input. From a text application's point of view, a screen can belong to one of three types. (1) A genuine text mode display, controlled by a video adapter or the central processor itself; this is the usual situation for an application running locally on various types of personal computers, and if not deterred by the system, a smart program may exploit the full power of the hardware text mode. (2) A text mode emulator; examples are xterm for the X Window System and the Win32 console for Microsoft Windows. This usually supports programs that expect a real text mode display, but may run considerably slower, and certain functions of a true text mode, such as uploading a custom font, may remain unavailable. (3) A remote terminal, where the communication capabilities are reduced to a serial line or its emulation, possibly with a few ioctls as an out-of-band channel (as in Telnet). This is the worst case, because software restrictions hinder the use of the capabilities of the display device.

Under Linux and other Unix-like systems, a program easily accommodates any of the three cases because the same interface controls the display and keyboard; specialized programming libraries also help output the text in a way appropriate to the given display device and interface. The American National Standards Institute standard ANSI X3.64 defines a standard set of escape sequences that can be used to drive terminals to create TUIs. Escape sequences may be supported in all three cases mentioned above, allowing arbitrary cursor movements and color changes. However, not all terminals follow this standard, and many non-compatible ones exist. Programmers soon learned that writing data directly to the screen buffer was far faster and simpler to program, and this change in approach resulted in many DOS TUI programs. The Win32 console environment is notorious for its emulation of certain EGA/VGA text mode features, particularly random access to the text buffer, even if the application runs in a window. On the other hand, programs running under Windows have much less control of the display and keyboard than Linux and DOS programs can have. Most of these DOS TUI programs used a blue background for the main screen, with white or yellow characters.

2.
Man page
–
A man page (short for manual page) is a form of software documentation usually found on a Unix or Unix-like operating system. Topics covered include computer programs, formal standards and conventions. A user may invoke a man page by issuing the man command; by default, man uses a terminal pager program such as more or less to display its output. To read the page for a Unix command called name, a user can type man name. Pages are traditionally referred to using the notation name(section): for example, man(1). The same page name may appear in more than one section of the manual, such as when the names of system calls, user commands and library routines coincide; examples are man(1) and man(7), or exit(2) and exit(3). The syntax for accessing a non-default manual section varies between man implementations. On Solaris and illumos, for example, the syntax for reading printf(3C) is man -s 3c printf; on Linux and BSD derivatives the same invocation would be man 3 printf, which searches for printf in section 3 of the man pages. In the first two years of the history of Unix, no documentation existed; the Unix Programmer's Manual was first published on November 3, 1971. The first actual man pages were written by Dennis Ritchie and Ken Thompson at the insistence of their manager Doug McIlroy in 1971. Aside from the man pages, the Programmer's Manual also accumulated a set of papers, some of them tutorials. Later versions of the documentation imitated the first man pages' terseness; Ritchie added a "How to get started" section to the Third Edition introduction, and Lorinda Cherry provided the "Purple Card" pocket reference for the Sixth and Seventh Editions. Versions of the software were named after the revision of the manual. For the Fourth Edition, the man pages were formatted using the troff typesetting package and its set of -man macros. At the time, the availability of online documentation through the manual system was regarded as a great advance. The modern descendants of 4.4BSD also distribute man pages as one of the primary forms of system documentation.
Few alternatives to man have enjoyed much popularity, with the possible exception of the GNU Project's info system. In addition, some Unix GUI applications now provide end-user documentation in HTML. Man pages are usually written in English, but translations into other languages may be available on the system. The default format of the man pages is troff, with either the macro package man or mdoc; this makes it possible to typeset a man page into PostScript, PDF, and various other formats for viewing or printing. Most Unix systems have a package for converting man pages to HTML, which enables users to browse their man pages using an HTML browser. On some systems, commands for system administration are relegated to a 1M subsection of the main commands section rather than placed in a separate section 8. Some subsection suffixes have a general meaning across sections, and some versions of man cache the formatted versions of the last several pages viewed.
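The section-lookup syntax discussed above can be summarized with a few concrete invocations. printf is used here because it exists both as a user command and as a C library function; exact options vary between man implementations, so the forms below are representative rather than universal:

```shell
# Default lookup: shows printf(1), the user command
man printf

# Section 3 (library functions) on Linux and BSD derivatives
man 3 printf

# The same page on Solaris and illumos
man -s 3c printf

# Print where the page would be found, without displaying it
# (supported by man-db and BSD man)
man -w 3 printf
```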

3.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer; regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is: what can be automated? The term computing is also synonymous with counting and calculating; in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions; the same program in its human-readable source code form enables a programmer to study and develop the algorithm. Because the instructions can be carried out in different types of computers, source code is translated into machine instructions for the particular machine, and the execution process then carries out the instructions in the computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, algorithms, and their documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.

4.
User interface
–
The user interface, in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. Examples of this concept of user interface include the interactive aspects of computer operating systems, hand tools, and heavy machinery operator controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology. Generally, the goal of user interface design is to produce a user interface that makes it easy and efficient to operate a machine in the way which produces the desired result. This generally means that the operator needs to provide minimal input to achieve the desired output. Other terms for user interface are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the part of the human–machine interface which we can see. In complex systems, the human–machine interface is typically computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to controlling the physical elements used for human–computer interaction. The engineering of human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering and usability engineering, which is part of systems engineering. Tools used for incorporating human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, the term graphical user interface is commonly used for the human–machine interface on computers, as nearly all of them now use graphics. There is a difference between a user interface and an operator interface or a human–machine interface.
A human–machine interface is typically local to one machine or piece of equipment; an operator interface is the interface method by which multiple pieces of equipment, linked by a host control system, are accessed or controlled. The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI; in practice, the abbreviation MMI is still frequently used, although some may claim that MMI stands for something different now. Another abbreviation is HCI, but it is more commonly used for human–computer interaction than for human–computer interface.

5.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems.
They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.

6.
Command-line interface
–
A command-line interface (CLI) is a means of interacting with a computer program where the user issues commands to the program in the form of successive lines of text (command lines). A program which handles the interface is called a command language interpreter or shell. The interface is implemented with a command line shell, which is a program that accepts commands as text input and converts them to appropriate operating system functions. Command-line interfaces to computer operating systems are less widely used by casual computer users, who favor graphical user interfaces. Alternatives to the command line include, but are not limited to, text user interface menus, keyboard shortcuts, and various desktop metaphors centered on the pointer; examples of the latter include Windows versions 1, 2, 3, 3.1, and 3.11, DosShell, and Mouse Systems PowerPanel. Command-line interfaces are often preferred by more advanced computer users, as they often provide a more concise and powerful means to control a program or operating system. Programs with command-line interfaces are generally easier to automate via scripting. A program that implements such a text interface is often called a command-line interpreter, command processor or shell. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS and 4OS2 for OS/2. For example, the default Windows GUI is a shell program named EXPLORER.EXE; such programs are shells, but not CLIs. Application programs may also have command line interfaces, and may accept command lines in several ways. When a program is launched from an OS command line shell, it receives its command line as arguments. In an interactive command line session, after launch, a program may provide the operator with an independent means to enter commands in the form of text. Through OS inter-process communication (most operating systems support some means of it), command lines from client processes may be redirected to a CLI program. Some applications support only a CLI, presenting a CLI prompt to the user and acting upon command lines as they are entered; some examples of CLI-only applications are DEBUG, Diskpart, Ed, Edlin, Fdisk and Ping. Some computer programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file; in other cases, a program may provide a CLI as an optional alternative to its GUI.
CLIs and GUIs often support different functionality. For example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features. The early Sierra games, like the first three King's Quest games, used commands from an internal command line to move the character around in the graphic window. Early computer systems often used teleprinter machines as the means of interaction with a human operator; the computer became one end of the human-to-human teleprinter model, so instead of a human communicating with another human over a teleprinter, a human communicated with a computer. In time, the actual mechanical teleprinter was replaced by a glass tty (a keyboard and screen emulating the teleprinter), and then by a smart terminal.
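The ways an application can receive command lines can be illustrated with a short sketch; bash itself stands in for the CLI program receiving redirected commands, and grep is only an example target:

```shell
# 1. Arguments passed when the program is launched from a shell:
grep "root" /etc/passwd

# 2. An interactive session: after launch, the program reads its own
#    commands from the keyboard (e.g. typing "open host" at telnet's prompt).

# 3. Inter-process communication: another process redirects command
#    lines to the CLI program; here, a pipe feeds a command line to bash.
echo 'echo hello from a pipe' | bash
```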

7.
Graphical user interface
–
GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard. The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices and smartphones, as well as smaller household, office and industrial controls. Designing the visual composition and temporal behavior of a GUI is an important part of application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows a flexible structure in which the interface is independent from, and indirectly linked to, application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will. Good user interface design relates to users more, and to system architecture less. Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page; smaller ones usually act as a user-input tool. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. By the 1990s, cell phones and handheld game systems also employed application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.
A GUI uses a combination of technologies and devices to provide a platform that users can interact with. A series of elements conforming to a visual language have evolved to represent information stored in computers; this makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm, especially in personal computers. The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most often a mouse. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware; window managers and other software combine to simulate the desktop environment with varying degrees of realism. Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space.

8.
Kernel (operating system)
–
The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system. It is the first program loaded on start-up, and it handles the rest of start-up as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. It handles memory and peripherals like keyboards, monitors and printers. The critical code of the kernel is usually loaded into a protected area of memory, which prevents it from being overwritten by applications or other, more minor parts of the operating system. The kernel performs its tasks, such as running processes and handling interrupts, in kernel space. In contrast, everything a user does is in user space: writing text in a text editor, running programs in a GUI, etc. This separation prevents user data and kernel data from interfering with each other and causing instability. The kernel's interface is a low-level abstraction layer. When a process makes requests of the kernel, the request is called a system call. Kernel designs differ in how they manage these system calls and resources. A monolithic kernel runs all the operating system instructions in the same address space, while a microkernel runs most processes in user space, for modularity. The kernel takes responsibility for deciding at any time which of the running programs should be allocated to the processor or processors. Random-access memory (RAM) is used to store both program instructions and data; typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available; the kernel is responsible for deciding which memory each process can use. I/O devices include such peripherals as keyboards, mice, disk drives, printers, network adapters, and display devices.
The kernel allocates requests from applications to perform I/O to an appropriate device. Key aspects necessary in resource management are the definition of an execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain. Kernels also usually provide methods for synchronization and communication between processes, called inter-process communication (IPC). Finally, a kernel must provide running programs with a method to make requests to access these facilities. The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. This allows every program to behave as if it is the only one (apart from the kernel) running.

9.
Syntax
–
In linguistics, syntax is the set of rules, principles, and processes that govern the structure of sentences in a given language, specifically word order. The term syntax is also used to refer to the study of such principles and processes. The goal of many syntacticians is to discover the syntactic rules common to all languages. In mathematics, syntax refers to the rules governing the behavior of mathematical systems, such as formal languages used in logic. The word syntax comes from Ancient Greek σύνταξις "coordination", which consists of σύν syn "together" and τάξις táxis "an ordering". A basic feature of a language's syntax is the sequence in which the subject (S), verb (V), and object (O) usually appear in sentences. Over 85% of languages usually place the subject first, either in the sequence SVO or the sequence SOV; the other possible sequences are VSO, VOS, OVS, and OSV, the last three of which are rare. In the West, the school of thought that came to be known as traditional grammar began with the work of Dionysius Thrax. For centuries, work in syntax was dominated by a framework known as grammaire générale. This system took as its premise the assumption that language is a direct reflection of thought processes and that there is therefore a single, most natural way to express a thought. However, it became apparent that there was no such thing as the most natural way to express a thought. The Port-Royal grammar modeled the study of syntax upon that of logic: syntactic categories were identified with logical ones, and all sentences were analyzed in terms of Subject – Copula – Predicate. Initially, this view was adopted even by the early comparative linguists such as Franz Bopp. The central role of syntax within theoretical linguistics became clear only in the 20th century. There are a number of theoretical approaches to the discipline of syntax. One school of thought, founded in the works of Derek Bickerton, sees syntax as a branch of biology; other linguists take a more Platonistic view, since they regard syntax to be the study of an abstract formal system.
Yet others consider syntax a taxonomical device to reach broad generalizations across languages. The hypothesis of generative grammar is that language is a structure of the human mind; the goal of generative grammar is to make a complete model of this inner language. This model could be used to describe all human language and to predict the grammaticality of any given utterance. This approach to language was pioneered by Noam Chomsky. Most generative theories assume that syntax is based upon the constituent structure of sentences. Generative grammars are among the theories that focus primarily on the form of a sentence. In categorial grammar, for example, the complex category (NP\S) is notated instead of V; NP\S is read as a category that searches to the left for an NP and outputs a sentence. The category of transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence.

10.
Bash (Unix shell)
–
Bash is a Unix shell and command language written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. First released in 1989, it has been distributed widely as the default shell for Linux distributions, and a version is also available for Windows 10. Bash is a command processor that typically runs in a text window, where the user types commands that cause actions. Bash can also read commands from a file, called a script. Like all Unix shells, it supports filename globbing, piping, here documents, command substitution, variables and control structures for condition-testing and iteration. The keywords, syntax and other basic features of the language are all copied from sh. Other features, e.g. history, are copied from csh and ksh. Bash is a POSIX-compatible shell, but with a number of extensions. A security hole in Bash dating from version 1.03, dubbed Shellshock, was discovered in early September 2014. Patches to fix the bugs were made available soon after the bugs were identified, but not all computers had been updated by then. Brian Fox began coding Bash on January 10, 1988, after Richard Stallman became dissatisfied with the lack of progress being made by a prior developer; Fox released Bash as a beta version in 1989. Since then, Bash has become by far the most popular shell among users of Linux, becoming the default shell on that operating system's various distributions. Bash has also been ported to Microsoft Windows and distributed with Cygwin and MinGW, to DOS by the DJGPP project, and to Novell NetWare. In September 2014, Stéphane Chazelas, a Unix/Linux, network and telecom specialist working in the UK, discovered a security bug in the program. The bug, first disclosed on September 24, was named Shellshock and assigned the identifiers CVE-2014-6271, CVE-2014-6277 and CVE-2014-7169. The bug was regarded as severe, since CGI scripts using Bash could be vulnerable; it was related to how Bash passes function definitions to subshells through environment variables.
The Bash command syntax is a superset of the Bourne shell command syntax. When a user presses the tab key within an interactive command shell, Bash automatically uses command-line completion to match partly typed program names, filenames and variable names. The Bash command-line completion system is flexible and customizable, and is often packaged with functions that complete arguments and filenames for specific programs. Bash's syntax has many extensions lacking in the Bourne shell. For example, Bash can perform integer calculations without spawning external processes; it uses the (( )) command and the $(( )) variable syntax for this purpose. It can also redirect standard output and standard error at the same time using the &> operator, which is simpler to type than the Bourne shell equivalent 'command > file 2>&1'. Bash supports process substitution using the <(command) and >(command) syntax, which substitutes the output of (or input to) a command where a filename is normally used. When invoked in POSIX mode, Bash conforms to the POSIX standard more closely. Since version 2.05b, Bash can redirect standard input from a "here string" using the <<< operator.
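The extensions just listed can be tried directly in a short script; the temporary file paths below are placeholders the script creates itself so the example is self-contained:

```shell
#!/usr/bin/env bash

# Integer arithmetic inside the shell; no external process such as expr is spawned.
echo $(( (3 + 4) * 5 ))            # prints 35

# &> redirects stdout and stderr together; the Bourne-shell spelling is '> file 2>&1'.
ls /nonexistent &> /tmp/all-output.log

# Here string (since 2.05b): feed a literal string to a command's standard input.
tr 'a-z' 'A-Z' <<< "shellshock"    # prints SHELLSHOCK

# Process substitution: a command's output stands in where a filename is expected.
printf 'b\na\n' > /tmp/f1.txt
printf 'a\nb\n' > /tmp/f2.txt
diff <(sort /tmp/f1.txt) <(sort /tmp/f2.txt) && echo "identical after sorting"
```

Note that `<(…)`, `<<<` and `&>` are Bash extensions; run the script with `bash`, not a plain POSIX `sh`.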

11.
Refreshable braille display
–
A refreshable braille display or braille terminal is an electro-mechanical device for displaying braille characters, usually by means of round-tipped pins raised through holes in a flat surface. Blind computer users who cannot use a computer monitor can use it to read text output; speech synthesizers are also used for the same task. Deafblind computer users may also use refreshable braille displays. The base of a refreshable braille display often integrates a pure braille keyboard; other variants exist that use a conventional QWERTY keyboard for input and braille pins for output. On some models the position of the cursor is represented by vibrating the dots, and some models have a switch associated with each cell to move the cursor to that cell directly. The mechanism which raises the dots uses the piezoelectric effect of certain crystals: such a crystal is connected to a lever, which in turn raises the dot, and there has to be a crystal for each dot of the display, i.e. eight per character. Because of the complexity of producing a reliable display that will cope with daily wear and tear, these devices are expensive; usually, only 40 or 80 braille cells are displayed, although models with between 18 and 40 cells exist in some notetaker devices. The software that controls the display is called a screen reader. It gathers the content of the screen from the operating system and converts it into braille characters. Screen readers for graphical operating systems are especially complex, because graphical elements like windows or slidebars have to be interpreted and described in text form. A rotating-wheel braille display was developed in 2000 by the National Institute of Standards and Technology; such wheels are still in the process of commercialization. In these units, braille dots are put on the edge of a spinning wheel; the dots are set in a simple scanning-style fashion as they spin past a stationary actuator that sets the braille characters. As a result, manufacturing complexity, and therefore cost, is reduced.
Designs for a full braille computer monitor have been patented but not yet produced. A full-page braille display with 1,000 cells was developed in 2015 by the Tactisplay Corp. With 12,000 pixels in total, configured as 120×100, it can show any BANA-compatible braille graphic page in 8 seconds. Tactile is a real-time text-to-braille translation device currently under development at the Massachusetts Institute of Technology.

12.
File manager
–
A file manager or file browser is a computer program that provides a user interface to manage files and folders. Folders and files may be displayed in a tree based on their directory structure. Some file managers contain features inspired by web browsers, including forward and back navigational buttons. Some file managers provide network connectivity via protocols such as FTP, NFS, SMB or WebDAV. This is achieved either by allowing the user to browse for a file server or by providing the file manager's own full client implementations for file server protocols. A term that predates the usage of file manager is directory editor. The term was used by other developers, including Jay Lepreau, who wrote the dired program in 1980, which ran on BSD; this was in turn inspired by an older program with the same name running on TOPS-20. Dired inspired other programs, including the dired editor script. File-list file managers are lesser known and older than orthodox file managers. One such file manager is flist, which was first used in 1981 on the Conversational Monitor System. This is a variant of fulist, which originated before late 1978, according to comments by its author, Theo Alkema. The flist program provided a list of files on the user's minidisk; the file attributes could be passed to scripts or function-key definitions, making it simple to use flist as part of CMS EXEC, EXEC2 or XEDIT scripts. Orthodox file managers or command-based file managers are text-menu based file managers. Orthodox file managers are one of the longest running families of file managers, preceding graphical user interface-based types. Developers create applications that duplicate and extend the manager that was introduced by PathMinder. The concept is more than thirty years old: PathMinder was released in 1984, and Norton Commander version 1.0 was released in 1986.
Despite the age of this concept, file managers based on Norton Commander are actively developed, and dozens of implementations exist for DOS, Unix and Microsoft Windows. Nikolai Bezroukov publishes his own set of criteria for an OFM standard. An orthodox file manager typically has three windows. Two of the windows are called panels and are positioned symmetrically at the top of the screen. The third is the command line, which is essentially a minimized command window that can be expanded to full screen. Only one of the panels is active at a given time; the active panel contains the file cursor. Panels are resizable and can be hidden. Files in the active panel serve as the source of file operations performed by the manager. For example, files can be copied or moved from the active panel to the location represented in the passive panel. This scheme is most effective for systems in which the keyboard is the primary or sole input device. The active panel shows information about the current working directory and the files that it contains.

13.
Process (computing)
–
In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system, a process may be made up of multiple threads of execution that execute instructions concurrently. A computer program is a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often results in more than one process being executed. Multitasking is a method to allow multiple processes to share processors and other system resources. Each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts. A common form of multitasking is time-sharing, a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly, which makes it seem like multiple processes are being executed simultaneously on the same processor; this seeming execution of multiple processes simultaneously is called concurrency. In general, a computer system process consists of the following resources: memory, which includes the executable code and process-specific data as well as a call stack and heap; operating system descriptors of resources that are allocated to the process, such as file descriptors or handles; security attributes, such as the process owner and the process's set of permissions; and processor state, such as the content of registers and physical memory addressing. The state is typically stored in computer registers when the process is executing, and in memory otherwise. The operating system holds most of this information about active processes in data structures called process control blocks.
Any subset of these resources, but typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads. The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures. The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
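As a small illustration of these ideas in a Unix shell, each command launched below runs as a separate process with its own process ID (PID), which the kernel tracks until the process terminates:

```shell
#!/usr/bin/env bash

sleep 2 &                     # launch a child process in the background
child=$!                      # PID of the most recently started background job
echo "shell PID: $$  child PID: $child"

# The kernel lists the child as an independent process until it exits.
ps -p "$child" -o pid=,comm=

wait "$child"                 # block until the child terminates, then reap it
echo "child exited with status $?"
```

Running the script twice starts two distinct processes from the same program (`sleep`), each with a different PID, which is exactly the program-versus-process distinction described above.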

14.
Application software
–
An application program is a computer program designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, a media player, and an aeronautical flight simulator. The collective noun application software refers to all applications collectively. This contrasts with system software, which is mainly involved with running the computer. Applications may be bundled with the computer and its system software or published separately. Apps built for mobile platforms are called mobile apps. In information technology, an application is a computer program designed to help people perform an activity. An application thus differs from an operating system, a utility, and a programming language. Depending on the activity for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications. User-written software tailors systems to meet the user's specific needs. User-written software includes templates, word processor macros, scientific simulations, and graphics scripts; even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. The above definitions may exclude some applications that may exist on some computers in large organizations; for an alternative definition of an app, see Application Portfolio Management. The word application, once used as an adjective, is not restricted to the "of or pertaining to application software" meaning. Sometimes a new and popular application arises which only runs on one platform, increasing the desirability of that platform; this is called a killer application or killer app.
There are many different ways to divide up different types of application software. Web apps have greatly increased in popularity for some uses, but the advantages of locally installed applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated. Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose; vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software will try to handle every aspect possible of, for example, manufacturing or banking systems, or accounting.

15.
Peripheral
–
A peripheral is an ancillary device used to put information into and get information out of the computer. Touchscreens are an example that combines different devices into a single hardware component that can be used both as an input and output device. A peripheral device is defined as any auxiliary device, such as a computer mouse or keyboard, that connects to and works with the computer in some way. Other examples of peripherals are image scanners, tape drives, microphones, loudspeakers, webcams and digital cameras. Common input peripherals include keyboards, computer mice, graphic tablets, touchscreens, barcode readers, image scanners, microphones, webcams, game controllers, light pens, and digital cameras. Common output peripherals include computer displays, printers and projectors.

16.
Multi-user
–
Multi-user software is software that allows access by multiple users of a computer. Most batch processing systems for mainframe computers may also be considered multi-user; however, the term multitasking is more common in this context. An example is a Unix server where multiple remote users have access to the Unix shell prompt at the same time. Another example uses multiple X Window sessions spread across multiple terminals powered by a single machine; this is an example of the use of the thin client concept. Similar functions were available under MP/M, Concurrent DOS, Multiuser DOS and FlexOS. The operating system provides isolation of each user's processes from those of other users. Management systems are implicitly designed to be used by multiple users, typically one or more system administrators and an end-user community. Multi-user operating systems such as Unix sometimes have a single-user mode or runlevel available for emergency maintenance.
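On a multi-user Unix system, the per-user isolation described above hinges on user identities that the kernel attaches to every process and file; a few standard commands expose them:

```shell
#!/usr/bin/env bash

whoami                       # login name of the current user
id -u                        # numeric user ID the kernel uses to isolate processes and files
who                          # one line per user session currently logged in
ps -e -o user= | sort -u     # which users own running processes right now
```

On a busy shared server, `who` and the `ps` pipeline typically show several distinct users at once; on a single-user workstation or in a container the lists may be short or empty.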

17.
Mainframe computer
–
The term mainframe originally referred to the large cabinets, called main frames, that housed the central processing unit and main memory of early computers. Later, the term was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but they continue to evolve. Their high stability and reliability enable these machines to run uninterrupted for decades. Mainframes are defined by high availability, one of the main reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to exploit these features; if improperly implemented, they may fail to deliver the expected benefits. In the late 1950s, most mainframes had no explicitly interactive interface, but only accepted sets of punched cards, paper tape, or magnetic tape to transfer data and programs. In cases where interactive terminals were supported, these were used almost exclusively for applications rather than program development. Typewriter and Teletype devices were also common control consoles for system operators through the 1970s, although ultimately supplanted by keyboard/display devices. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers; users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals and terminal emulation. This format of end-user computing reached mainstream obsolescence in the 1990s due to the advent of personal computers provided with GUIs.
After 2000, most modern mainframes have partially or entirely phased out classic green screen terminal access for end-users in favour of Web-style user interfaces. Infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology; IBM claimed that its newer mainframes could reduce data center energy costs for power and cooling. Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers; in this role, a single mainframe can replace higher-functioning hardware services available to conventional servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems. Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, can be deployed in two-mainframe installations that support continuous business service, avoiding both planned and unplanned outages. In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage. Mainframes are designed to handle very high volume input and output and emphasize throughput computing. Since the late 1950s, mainframe designs have included subsidiary hardware which manages the I/O devices, and it is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual. Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it reasonably quickly.
Other server families also offload I/O processing and emphasize throughput computing. Mainframes also have execution integrity characteristics for fault tolerant computing. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.

18.
Computer terminal
–
A computer terminal is an electronic or electromechanical hardware device that is used for entering data into, and displaying data from, a computer or a computing system. The function of a terminal is confined to the display and input of data; a terminal that depends on the host computer for its processing power is called a dumb terminal or thin client. A personal computer can run terminal emulator software that replicates the function of a terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system. The terminal of the first working programmable, fully automatic digital Turing-complete computer, the Z3, had a keyboard. Early user terminals connected to computers were electromechanical teleprinters/teletypewriters, such as the Teletype Model 33 ASR, originally used for telegraphy, or the Friden Flexowriter. Later, printing terminals such as the DECwriter LA30 were developed; however, printing terminals were limited by the speed at which paper could be printed, and for interactive use the paper record was unnecessary. The problem with early video terminals was that the amount of memory needed to store the information on a page of text was comparable to the memory in low-end minicomputers then in use. Displaying the information at video speeds was also a challenge, and the control logic took up a rack's worth of pre-integrated-circuit electronics. Another approach involved the use of the storage tube, a specialized CRT developed by Tektronix that retained information written on it without the need to refresh. The Datapoint 3300 from Computer Terminal Corporation was announced in 1967 and shipped in 1969; it solved the memory space issue mentioned above by using a digital shift-register design, and by using only 72 columns rather than the later more common choice of 80. Intelligent terminals added local processing features such as a blinking cursor that can be positioned; the term intelligent in this context dates from 1969.
Notable examples include the IBM 2250 and IBM 2260, predecessors to the IBM 3270. Providing even more processing possibilities, workstations like the TeleVideo TS-800 could run CP/M-86, blurring the distinction between terminal and personal computer. Most terminals were connected to minicomputers or mainframe computers and often had a green or amber screen. Typically, terminals communicate with the computer via a serial port over a null modem cable, often using an EIA RS-232, RS-422 or RS-423 interface or a current loop serial interface. In fact, the design for the Intel 8008 was originally conceived at Computer Terminal Corporation as the processor for the Datapoint 2200. While early IBM PCs had single-color green screens, these screens were not terminals. The screen of a PC did not contain any character generation hardware; all signals and video formatting were generated by the video display card in the PC, or by the CPU. An IBM PC monitor, whether it was the monochrome display or the 16-color display, was technically much more similar to an analog TV set than to a terminal. With suitable software a PC could, however, emulate a terminal; the Data General One could be booted into terminal emulator mode from its ROM. Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today.

19.
Serial port
–
In computing, a serial port is a serial communication interface through which information transfers in or out one bit at a time. Throughout most of the history of computers, data was transferred through serial ports to devices such as modems, terminals and various peripherals. Modern computers without serial ports may require serial-to-USB converters to allow compatibility with RS-232 serial devices. Serial ports are still used in applications such as industrial automation systems, scientific instruments, point of sale systems and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics, and network equipment often uses a serial console for configuration. Serial ports are still used in these areas because they are simple, cheap and their console functions are highly standardized and widespread. A serial port requires very little supporting software from the host system. Some computers, such as the IBM PC, use an integrated circuit called a UART, which converts characters to and from asynchronous serial form, implementing the timing and framing of data in hardware. Very low-cost systems, such as some early home computers, would instead use the CPU to send the data through an output pin, using the bit-banging technique. Early home computers often had proprietary serial ports with pinouts and voltage levels incompatible with RS-232. Low-cost processors now allow higher-speed, but more complex, serial communication standards such as USB and FireWire to replace RS-232; these make it possible to connect devices that would not have operated feasibly over slower serial connections, such as mass storage, sound and video devices. Many personal computer motherboards still have at least one serial port; small-form-factor systems and laptops may omit RS-232 connector ports to conserve space, but the electronics are still there.
RS-232 has been a standard for so long that the circuits needed to control a serial port became very cheap and often exist on a single chip, sometimes also with circuitry for a parallel port. The individual signals on a serial port are unidirectional, and when connecting two devices the outputs of one device must be connected to the inputs of the other. Devices are divided into two categories: data terminal equipment (DTE) and data circuit-terminating equipment (DCE). A line that is an output on a DTE device is an input on a DCE device and vice versa, so a DCE device can be connected to a DTE device with a straight wired cable; conventionally, computers and terminals are DTE while modems and peripherals are DCE. If it is necessary to connect two DTE devices, a cross-over null modem, in the form of either an adapter or a cable, must be used. Generally, serial port connectors are gendered, only allowing a connector to mate with a connector of the opposite gender. With D-subminiature connectors, the male connectors have protruding pins. Either type of connector can be mounted on equipment or a panel, or terminate a cable. Connectors mounted on DTE are likely to be male, and those mounted on DCE are likely to be female; however, this is far from universal: for instance, most serial printers have a female DB25 connector, but they are DTEs. The desire to supply serial interface cards with two ports required that IBM reduce the size of the connector to fit onto a single card back panel; a DE-9 connector also fits onto a card alongside a second DB-25 connector. Starting around the time of the introduction of the IBM PC-AT, serial ports were built with a 9-pin connector to save cost and space.

20.
Modem
–
A modem (modulator–demodulator) is a network hardware device that modulates one or more carrier wave signals to encode digital information for transmission, and demodulates signals to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used with any means of transmitting analog signals, from light-emitting diodes to radio. Modems are generally classified by the amount of data they can send in a given unit of time, usually expressed in bits per second. Modems can also be classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying with two frequencies, corresponding to two distinct symbols, to carry 300 bits per second using 300 baud. By contrast, the original ITU V.22 standard could transmit and receive four distinct symbols (two bits per symbol), carrying 1,200 bits per second at 600 baud. News wire services in the 1920s used multiplex devices that satisfied the definition of a modem; however, the modem function was incidental to the multiplexing function, so they are not commonly included in the history of modems. The U.S. SAGE air-defense modems were described by AT&T's Bell Labs as conforming to their newly published Bell 101 dataset standard; while they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110 baud modems. The 201A and 201B Data-Phones were synchronous modems using two-bit-per-baud phase-shift keying. The famous Bell 103A dataset standard was also introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines; frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz.
The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the Teletype Model 33 ASR and KSR, and AT&T reduced modem costs by introducing the originate-only 113D and the answer-only 113B/C modems. For many years, the Bell System maintained a monopoly on the use of its phone lines, allowing only Bell-supplied devices to be attached to its network; however, the seminal Hush-a-Phone v. FCC case of 1956 concluded that it was within the FCC's jurisdiction to regulate the operation of the Bell System. The FCC found that as long as a device was not electrically attached to the system, it could not harm the network; this led to a number of devices that mechanically connected to the phone through a standard handset. Since most handsets were supplied by Western Electric and thus of a standard design, such acoustic connections were relatively easy to build. This type of connection was used for many devices, such as answering machines. Acoustically coupled Bell 103A-compatible 300 bit/s modems were common during the 1970s; well-known models included the Novation CAT and the Anderson-Jacobson, the latter spun off from an in-house project at Stanford Research Institute. An even lower-cost option was the Pennywhistle modem, designed to be built using parts from electronics scrap. In December 1972, Vadic introduced the VA3400, notable for full-duplex operation at 1,200 bit/s over the phone network. Like the 103A, it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic.
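The relation between baud and bit rate described above is simple multiplication: bit rate equals symbol rate times bits per symbol. A quick sanity check with shell arithmetic, using the V.21 and V.22 figures given in the text:

```shell
#!/usr/bin/env bash

# V.21: two distinct symbols -> 1 bit per symbol, at 300 baud
echo "V.21: $(( 300 * 1 )) bit/s"      # 300 bit/s

# V.22: four distinct symbols -> 2 bits per symbol, at 600 baud
echo "V.22: $(( 600 * 2 )) bit/s"      # 1200 bit/s
```

This is why baud and bit/s are only interchangeable for one-bit-per-symbol schemes like V.21; for multi-level modulation the bit rate exceeds the baud rate.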

21.
Unix-like
–
A Unix-like operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell, there is no standard for defining the term, and some difference of opinion is possible as to the degree to which a given operating system or application is Unix-like. The Open Group owns the UNIX trademark and administers the Single UNIX Specification and they do not approve of the construction Unix-like, and consider it a misuse of their trademark. Other parties frequently treat Unix as a genericized trademark, in 2007, Wayne R. Gray sued to dispute the status of UNIX as a trademark, but lost his case, and lost again on appeal, with the court upholding the trademark and its ownership. Unix-like systems started to appear in the late 1970s and early 1980s, many proprietary versions, such as Idris, UNOS, Coherent, and UniFlex, aimed to provide businesses with the functionality available to academic users of UNIX. These largely displaced the proprietary clones, growing incompatibility among these systems led to the creation of interoperability standards, including POSIX and the Single UNIX Specification. Various free, low-cost, and unrestricted substitutes for UNIX emerged in the 1980s and 1990s, including 4. 4BSD, Linux, some of these have in turn been the basis for commercial Unix-like systems, such as BSD/OS and OS X. The various BSD variants are notable in that they are in fact descendants of UNIX, however, the BSD code base has evolved since then, replacing all of the AT&T code. Since the BSD variants are not certified as compliant with the Single UNIX Specification, dennis Ritchie, one of the original creators of Unix, expressed his opinion that Unix-like systems such as Linux are de facto Unix systems. Eric S. 
Raymond and Rob Landley have suggested there are three kinds of Unix-like systems. Genetic UNIX: those systems with a historical connection to the AT&T codebase. Most commercial UNIX systems fall into this category, as do the BSD systems, which are descendants of work done at the University of California, Berkeley in the late 1970s and early 1980s. Some of these systems have no original AT&T code but can trace their ancestry to AT&T designs. Trademark or branded UNIX: these systems, largely commercial in nature, have been determined by the Open Group to meet the Single UNIX Specification and are allowed to carry the UNIX name; many ancient UNIX systems no longer meet this definition. Around 2001, Linux was given the opportunity to get a certification, including free help from the POSIX chair Andrew Josey, for the price of one dollar. Functional UNIX: broadly, any system that behaves in a manner roughly consistent with the UNIX specification. Some non-Unix-like operating systems provide a Unix-like compatibility layer, with varying degrees of Unix-like functionality. IBM z/OS's UNIX System Services is sufficiently complete to be certified as trademark UNIX. Cygwin and MSYS both provide a GNU environment on top of the Microsoft Windows user API, sufficient for most common open source software to be compiled and run. Subsystem for Unix-based Applications provides Unix-like functionality as a Windows NT subsystem. Windows Subsystem for Linux provides a Linux-compatible kernel interface developed by Microsoft, containing no Linux code, with Ubuntu user-mode binaries running on top of it.

22.
Secure Shell
–
Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. The best-known application is for remote login to computer systems by users. SSH provides a secure channel over an unsecured network in a client–server architecture. Common applications include remote command-line login and remote command execution. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The most visible application of the protocol is for access to accounts on Unix-like operating systems. In 2015, Microsoft announced that they would include support for SSH in a future release. SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin and rsh. Those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user. There are several ways to use SSH. One is to use automatically generated public–private key pairs to simply encrypt a network connection, and then use password authentication to log on. Another is to use a manually generated public–private key pair to perform the authentication; in this scenario, anyone can produce a matching pair of different keys. The public key is placed on all computers that must allow access to the owner of the matching private key. While authentication is based on the private key, the key itself is never transferred through the network during authentication. SSH only verifies whether the person offering the public key also owns the matching private key. In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities; accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user.
On Unix-like systems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys. This file is respected by SSH only if it is not writable by anything apart from the owner. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase. The private key can also be looked for in standard places. The ssh-keygen utility produces the public and private keys, always in pairs. SSH also supports password-based authentication that is encrypted by automatically generated keys; in this case the attacker could imitate the legitimate server side, ask for the password, and obtain it.
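As a concrete sketch of the key-pair workflow described above (the temporary directory is created on the fly, and the remote host named in the comment is hypothetical), key-based login is typically prepared along these lines:

```shell
# Generate a key pair; ssh-keygen always produces the two halves together.
# -N "" gives an empty passphrase only so this example runs unattended;
# in practice a passphrase locks the private key at rest, as noted above.
tmpdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$tmpdir/id_ed25519"

# The private key stays local. The public half is what gets appended to
# ~/.ssh/authorized_keys on the remote machine, e.g. with:
#   ssh-copy-id -i "$tmpdir/id_ed25519.pub" user@remote.example.com
ls "$tmpdir"
```

The ssh-copy-id step is shown only as a comment because it needs a reachable server; the generated .pub file contains the single line that ends up in authorized_keys.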

23.
Tunneling protocol
–
In computer networks, a tunneling protocol allows a network user to access or provide a network service that the underlying network does not support or provide directly. One important use of a tunneling protocol is to allow a foreign protocol to run over a network that does not support that particular protocol, for example, running IPv6 over IPv4. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, a tunneling protocol works by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol. To understand a particular protocol stack imposed by tunneling, network engineers must understand both the payload and delivery protocol sets. In some cases the delivery and payload protocols are the same, but the payload addresses are incompatible with those of the delivery network. It is also possible to establish a connection using the link layer. The Layer 2 Tunneling Protocol (L2TP) allows the transmission of frames between two nodes. A tunnel is not encrypted by default; it relies on the TCP/IP protocol chosen to determine the level of security. SSH uses port 22 to enable data encryption of payloads being transmitted over a network connection. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway. A Secure Shell tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file-system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file-system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel.
Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security. To set up a local SSH tunnel, one configures an SSH client to forward a specified local port to a port on the remote machine. Once the SSH tunnel has been established, the user can connect to the local port to access the network service. The local port does not have to be the same as the remote port. SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services – so long as a site allows outgoing connections. For example, an organization may prohibit a user from accessing Internet web pages directly without passing through the proxy filter. But users may not wish to have their web traffic monitored or blocked by the proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their machine to port 80 on a remote web server.
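A minimal sketch of the local-forwarding setup described above. All hostnames and ports here are hypothetical, and the ssh command is assembled and printed rather than executed, since running it would require a reachable SSH server:

```shell
# -L local_port:destination_host:destination_port forwards connections made
# to local_port through the encrypted channel to the destination;
# -N opens the tunnel without running a remote command.
local_port=1139
destination=fileserver.example.com:139
gateway=user@gateway.example.com

tunnel_cmd="ssh -N -L ${local_port}:${destination} ${gateway}"
echo "$tunnel_cmd"
```

Once such a tunnel is up, an SMB client pointed at localhost:1139 reaches the file server through the encrypted channel, even though SMB itself is unencrypted.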

24.
X Window System
–
The X Window System (X) is a windowing system for bitmap displays, common on Unix-like computer operating systems. X provides the basic framework, or primitives, for building a GUI environment: drawing and moving windows on the display device and interacting with a mouse. X does not mandate the user interface – this is handled by individual client programs, and programs may use X's graphical abilities with no user interface at all. As such, the visual styling of X-based environments varies greatly, and different programs may present radically different interfaces. X originated at the Massachusetts Institute of Technology in 1984; the protocol has been at version 11 since September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org Server, available as free and open source software under the MIT License. X is an architecture-independent system for remote graphical user interfaces and input device capabilities: each person using a networked terminal has the ability to interact with the display with any type of user input device. Unlike most earlier display protocols, X was specifically designed to be used over network connections rather than on an integral or attached display device. X features network transparency, which means an X program running on a computer somewhere on a network can display its user interface on an X server running on some other computer on the network. The fact that the term "server" is applied to the software in front of the user is often surprising to users accustomed to their programs being clients to services on remote computers.
X's network protocol is based on X command primitives; this approach allows both 2D and 3D operations by an X client application, which might be running on a different computer, to still be fully accelerated on the X server's display. X provides no support for audio; several projects exist to fill this niche. X uses a client–server model: an X server communicates with various client programs. The server accepts requests for graphical output and sends back user input. A client and server can even communicate securely over the Internet by tunneling the connection over an encrypted network session. An X client itself may emulate an X server by providing display services to other clients. This is known as "X nesting"; open-source clients such as Xnest and Xephyr support such X nesting.
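The client–server split shows up in how clients find a display. An X display name has the form host:display, and when TCP connections are enabled, the server for display N conventionally listens on port 6000+N. A small sketch (the workstation host name in the comment is hypothetical):

```shell
# With network transparency, a client launched as
#   DISPLAY=workstation.example.com:0 xclock
# renders its window on the X server driving display 0 of that host.
# The conventional TCP port for display N is 6000+N:
display_num=0
port=$((6000 + display_num))
echo "display :${display_num} -> TCP port ${port}"
```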

25.
Windows Vista
–
Windows Vista is an operating system by Microsoft for use on personal computers, including home and business desktops, laptops, tablet PCs and media center PCs. Development was completed on 8 November 2006, and over the following three months it was released in stages to computer hardware and software manufacturers and business customers. On 30 January 2007, it was released worldwide and was made available for purchase. It was succeeded by Windows 7, which was released to manufacturing on 22 July 2009. Vista aimed to increase the level of communication between machines on a home network, using peer-to-peer technology to simplify sharing files and media between computers and devices. Windows Vista included version 3.0 of the .NET Framework. Microsoft's primary stated objective with Windows Vista was to improve the state of security in the Windows operating system. One common criticism of Windows XP and its predecessors was their commonly exploited security vulnerabilities and overall susceptibility to malware and viruses; Microsoft stated that it prioritized improving the security of Windows XP and Windows Server 2003 above finishing Windows Vista, thus delaying its completion. While these new features and security improvements have garnered positive reviews, Vista has also been the target of much criticism. As a result of these and other issues, Windows Vista saw initial adoption and satisfaction rates lower than Windows XP. In May 2010, Windows Vista's market share estimates ranged from 15% to 26%. On 22 October 2010, Microsoft ceased sales of copies of Windows Vista. As of March 2017, Vista's market share was 0.72%. Microsoft stopped providing mainstream support for Windows Vista on 10 April 2012; extended support will end on 11 April 2017. Microsoft began work on Windows Vista, known at the time by its codename "Longhorn", in May 2001, five months before the release of Windows XP.
It was originally expected to ship sometime late in 2003 as a step between Windows XP and "Blackcomb", which was planned to be the company's next major operating system release. Gradually, Longhorn assimilated many of the important new features and technologies slated for Blackcomb; in some builds of Longhorn, the license agreement still said "For the Microsoft product codenamed Whistler". Many of Microsoft's developers were also re-tasked to build updates to Windows XP. Faced with ongoing delays and concerns about feature creep, Microsoft announced on 27 August 2004 that it had revised its plans. Longhorn became known as Windows Vista in 2005. The early development stages of Longhorn were generally characterized by incremental improvements and updates to Windows XP. After several months of relatively little news or activity from Microsoft with Longhorn, Microsoft released Build 4008, which was also privately handed out to a select group of software developers. An optional new taskbar was introduced that was thinner than the previous build's; the most notable visual and functional difference, however, came with Windows Explorer. The incorporation of the Plex theme made blue the dominant color of the entire application, and the Windows XP-style task pane was almost completely replaced with a large horizontal pane that appeared under the toolbars.

26.
PowerShell
–
PowerShell is a task automation and configuration management framework from Microsoft, consisting of a command-line shell and associated scripting language built on the .NET Framework. Initially a Windows component only, PowerShell was made open-source and cross-platform on 18 August 2016. In PowerShell, administrative tasks are generally performed by cmdlets, which are specialized .NET classes implementing a particular operation. Sets of cmdlets may be combined into scripts or executables, or invoked by instantiating regular .NET classes. Cmdlets work by accessing data in different data stores, like the file system or registry, which are made available to the PowerShell runtime via PowerShell providers. PowerShell also provides a hosting API with which the PowerShell runtime can be embedded inside other applications; these applications can then use PowerShell functionality to implement certain operations, including those exposed via the graphical interface. Other Microsoft applications, including Microsoft SQL Server 2008, also expose their management interface via PowerShell cmdlets. PowerShell includes its own extensive, console-based help, similar to man pages in Unix shells, via the Get-Help cmdlet. Local help contents can be retrieved from the Internet via the Update-Help cmdlet; alternatively, help from the web can be acquired on a case-by-case basis via the -Online switch to Get-Help. Every released version of Microsoft DOS and Microsoft Windows for personal computers has included a command-line interface tool. The shell is a command-line interpreter that supports a few basic commands; for other purposes, a separate console application must be invoked from the shell. The shell also includes a scripting language, which can be used to automate various tasks. In Windows Server 2003 the situation was improved, but scripting support was still considered unsatisfactory. Microsoft attempted to address some of these shortcomings by introducing the Windows Script Host in 1998 with Windows 98.
It integrates with the Active Scripting engine and allows scripts to be written in compatible languages, such as JScript and VBScript. Different versions of Windows provided various special-purpose command-line interpreters with their own command sets; none of them were integrated with the command shell, nor were they interoperable. By 2002 Microsoft had started to develop a new approach to command-line management; the shell and the ideas behind it were published in August 2002 in a white paper titled the "Monad Manifesto". Monad was to be a new extensible command shell with a design capable of automating a full range of core administrative tasks. Microsoft first showed off Monad at the Professional Development Conference in Los Angeles in October 2003. A private beta program began a few months later, which eventually led to a public beta program. Microsoft published the first Monad public beta release on June 17, 2005, Beta 2 on September 11, 2005, and Beta 3 on January 10, 2006. Not much later, on April 25, 2006, Microsoft formally announced that Monad had been renamed Windows PowerShell; Release Candidate 1 of PowerShell was released at the same time.

27.
Cmd.exe
–
Command Prompt, also known as cmd.exe or cmd, is the command-line interpreter on Windows NT, Windows CE, OS/2 and eComStation operating systems. It is the counterpart of COMMAND.COM in DOS and Windows 9x systems. The initial version of Command Prompt for Windows NT was developed by Therese Stowell. Command Prompt interacts with the user through a command-line interface; in Windows, this interface is implemented through the Win32 console. Command Prompt may take advantage of features available to native programs of its own platform. For example, in OS/2 it can use real pipes in command pipelines, and it is possible to redirect the standard error stream. In Windows, Command Prompt is compatible with COMMAND.COM but provides the following extensions over it. In OS/2, errors are reported in the chosen language of the system, their text being taken from the system message files; the HELP command can then be issued with the error message number to obtain further information. It supports the use of arrow keys to scroll through command history, a function that was available to COMMAND.COM only via an external component called DOSKEY. It adds command-line completion for file and folder paths. It treats the caret character (^) as the escape character: there are special characters in Command Prompt and COMMAND.COM that are part of the command syntax and, if specified without a caret, are interpreted as such rather than literally. It supports delayed variable expansion, fixing DOS idioms that made using control structures hard and complex. The extensions can be disabled, providing a stricter compatibility mode. Internal commands have also been improved: the DelTree command was merged into the RD command as part of its /S switch. The SetLocal and EndLocal commands limit the scope of changes to the environment: changes made to the command-line environment after SetLocal are local to the batch file, and the EndLocal command restores the previous settings. The Call command allows subroutines within a batch file. The Call command in COMMAND.
COM only supports calling external batch files. File name parser extensions to the Set command are comparable with those of the C shell. The Set command can also perform expression evaluation. An expansion of the For command supports parsing files and arbitrary sets in addition to file names. The new PushD and PopD commands provide access to past navigated paths, similar to the forward and back buttons in a web browser. The conditional IF command can perform case-insensitive comparisons and numeric equality and inequality comparisons in addition to case-sensitive string comparisons. This was available in DR-DOS but not in PC DOS or MS-DOS.

28.
Unix shell
–
A Unix shell is a command-line interpreter or shell that provides a traditional Unix-like command-line user interface. Users direct the operation of the computer by entering commands as text for a command-line interpreter to execute. Users typically interact with a Unix shell using a terminal emulator; however, direct operation via serial hardware connections or network sessions is common for server systems. All Unix shells provide filename wildcarding, piping, here documents, command substitution, variables and control structures for condition-testing and iteration. In its most generic sense, the term shell means any program that users employ to type commands. In Unix-like operating systems, users typically have many choices of command-line interpreters for interactive sessions; when a user logs in to the system interactively, a shell program is automatically executed for the duration of the session. The Unix shell is both a command language and a scripting programming language, and is used by the operating system as the facility to control the execution of the system. Shells created for other operating systems often provide similar functionality. On hosts with a windowing system, like macOS, some users may never use the shell directly; however, some vendors have replaced the traditional shell-based startup system with different approaches. The first Unix shell was the Thompson shell, sh, written by Ken Thompson at Bell Labs and distributed with Versions 1 through 6 of Unix. Though not in current use, it is still available as part of some Ancient UNIX systems. It was modeled after the Multics shell, itself modeled after the RUNCOM program Louis Pouzin showed to the Multics team; the "rc" suffix on some Unix configuration files is a remnant of the RUNCOM ancestry of Unix shells. The PWB shell or Mashey shell, sh, was a version of the Thompson shell, augmented by John Mashey and others and distributed with the Programmer's Workbench UNIX.
It focused on making shell programming practical, especially in large shared computing centers, and it added shell variables, user-executable shell scripts, and interrupt-handling. Control structures were extended from if/goto to if/then/else/endif and switch/breaksw/endsw; as shell programming became widespread, these external commands were incorporated into the shell itself for performance. But the most widely distributed and influential of the early Unix shells were the Bourne shell and the C shell; both shells have been used as the coding base and model for many derivative and work-alike shells with extended feature sets. The Bourne shell, sh, was a rewrite by Stephen Bourne at Bell Labs. Its language, including the use of a keyword to mark the end of a block, was influenced by ALGOL 68. Traditionally, the Bourne shell program name is sh and its path in the Unix file system hierarchy is /bin/sh, but a number of compatible work-alikes are also available with various improvements and additional features. The sh of FreeBSD and NetBSD is based on ash, which has been enhanced to be POSIX-conformant for the occasion.
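The features listed above can be sketched in a few lines of POSIX sh; everything here is standard shell, run in a throwaway directory:

```shell
#!/bin/sh
# A sketch of core Unix shell features, run in a temporary directory.
dir=$(mktemp -d)                 # command substitution: capture a command's output
cd "$dir" || exit 1

printf 'one\n' > a.txt           # sample files to operate on
printf 'two\n' > b.txt

set -- *.txt                     # filename wildcarding: *.txt expands to a.txt b.txt
count=$#                         # a shell variable holding the match count

upper=$(cat *.txt | tr 'a-z' 'A-Z')   # piping one command's output into another

if [ "$count" -eq 2 ]; then      # control structure for condition-testing
  summary="found $count text files"
else
  summary="unexpected file count"
fi

cat <<EOF
$summary
$upper
EOF
```

The here document at the end feeds the inline text, with variables expanded, to cat's standard input.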

29.
Teleprinter
–
A teleprinter is an electromechanical typewriter that can be used to send and receive typed messages from point to point and point to multipoint over various types of communications channels. They were adapted to provide an interface to early mainframe computers and minicomputers, sending typed data to the computer. Some models could also be used to create punched tape for data storage. Teleprinters could use a variety of different communication media, including a pair of wires, dedicated non-switched telephone circuits, and switched networks that operated similarly to the public telephone network. A teleprinter attached to a modem could also communicate through standard switched public telephone lines; this latter configuration was often used to connect teleprinters to remote computers, particularly in time-sharing environments. Teleprinters have largely been replaced by fully electronic computer terminals, which usually use a computer monitor instead of a printer. Teleprinters were invented in order to send and receive messages without the need for operators trained in the use of Morse code: a system of two teleprinters, with one operator trained to use a typewriter, replaced two trained Morse code operators. The teleprinter system improved message speed and delivery time, making it possible for messages to be flashed across a country with little manual intervention. In 1835 Samuel Morse devised a recording telegraph, and in 1841 Alexander Bain devised a printing telegraph. By 1846, the Morse telegraph service was operational between Washington, D.C. and New York. Royal Earl House patented his printing telegraph that same year. He linked two 28-key piano-style keyboards by wire; each piano key represented a letter of the alphabet and, when pressed, caused the letter to print at the receiving end. A shift key gave each main key two optional values, and a 56-character typewheel at the sending end was synchronised to coincide with a similar wheel at the receiving end.
It was thus an example of a synchronous data transmission system. House's equipment could transmit around 40 instantly readable words per minute, and the printer could copy and print out up to 2,000 words per hour. This invention was first put in operation and exhibited at the Mechanics Institute in New York in 1844. Landline teleprinter operations began in 1849, when a circuit was put in service between Philadelphia and New York City. In 1855, David Edward Hughes introduced a machine built on the work of Royal Earl House. Émile Baudot designed a system using a five-unit code in 1874; the Baudot system was adopted in France in 1877, and later extensively elsewhere. During 1901 Baudot's code was modified by Donald Murray, prompted by his development of a typewriter-like keyboard.

30.
MS-DOS
–
MS-DOS is a discontinued operating system for x86-based personal computers, developed mostly by Microsoft. MS-DOS resulted from a request in 1981 by IBM for an operating system to use in its IBM PC range of personal computers. Microsoft quickly bought the rights to 86-DOS from Seattle Computer Products, and IBM licensed and released it in August 1981 as PC DOS 1.0 for use in their PCs. During its life, several competing products were released for the x86 platform, and it was also the underlying basic operating system on which early versions of Windows ran as a GUI. It consumes negligible installation space. MS-DOS was a renamed form of 86-DOS, owned by Seattle Computer Products and written by Tim Paterson. This first version was shipped in August 1980. Microsoft, which needed an operating system for the IBM Personal Computer, hired Tim Paterson in May 1981 and bought 86-DOS 1.10 for $75,000 in July of the same year. Microsoft kept the version number, but renamed it MS-DOS, and also licensed MS-DOS 1.10/1.14 to IBM. Within a year Microsoft licensed MS-DOS to over 70 other companies. MS-DOS was designed to be an OS that could run on any 8086-family computer; thus, there were many different versions of MS-DOS for different hardware, and there is a major distinction between an IBM-compatible machine and an MS-DOS machine. This design would have worked well for compatibility if application programs had only used MS-DOS services to perform device I/O. Microsoft omitted multi-user support from MS-DOS because Microsoft's Unix-based operating system, Xenix, was fully multi-user. After the breakup of the Bell System, however, AT&T Computer Systems started selling UNIX System V; believing that it could not compete with AT&T in the Unix market, Microsoft abandoned Xenix, and in 1987 transferred ownership of Xenix to the Santa Cruz Operation.
On 25 March 2014, Microsoft made the code to SCP MS-DOS 1.25 publicly available. As an April Fools joke in 2015, Microsoft Mobile launched a Windows Phone application called MS-DOS Mobile, which was presented as a new mobile operating system and worked similarly to MS-DOS.
Version 3.1 – Support for Microsoft Networks.
Version 3.2 – First version to support 3.5-inch, 720 kB floppy drives and diskettes.
Version 3.21
Version 3.22
Version 3.25
Version 3.3 – First version to support 3.5-inch, 1.44 MB floppy drives and diskettes.
Version 3.3a
Version 3.31 – Supports FAT16B and larger drives.
MS-DOS 4.0 and MS-DOS 4.1 – A separate branch of development with additional multitasking features; it is unrelated to any later versions, including versions 4.00 and 4.01 listed below.
MS-DOS 4.x – Includes a graphical/mouse interface. It had many bugs and compatibility issues.
Version 4.00 – First version to support a hard disk partition that is greater than 32 MiB.
Version 4.01 – Microsoft's rewritten Version 4.00, released under the MS-DOS label. First version to introduce a volume serial number when formatting hard disks and floppy disks.

31.
Batch files
–
A batch file is a kind of script file in DOS, OS/2 and Microsoft Windows. It consists of a series of commands to be executed by the command-line interpreter. A batch file may contain any command the interpreter accepts interactively, and may use constructs that enable conditional branching and looping within the batch file, such as if, for, goto and labels. The term "batch" is from batch processing, meaning non-interactive execution. When a batch file is run, the shell program reads the file and executes its commands. Unix-like operating systems, such as Linux, have a similar type of file called a shell script. The filename extension .bat is used in DOS and Windows; Windows NT and OS/2 also added .cmd. Batch files for other environments may have different extensions, e.g. .btm in 4DOS, 4OS2 and 4NT related shells. The detailed handling of batch files has changed over time; some of the detail in this article applies to all batch files, while other details apply only to certain versions. In DOS, a batch file can be started from the command-line interface by typing its name, followed by any required parameters. When DOS loads, the file AUTOEXEC.BAT, when present, is automatically executed. A .bat file name extension identifies a file containing commands that are executed by the command interpreter COMMAND.COM. Microsoft Windows was introduced in 1985 as a graphical user interface-based overlay on text-based operating systems and was designed to run on DOS; in order to start it, the WIN command was used. In the earlier versions, one could run a .bat type file from Windows in the MS-DOS Prompt. Windows 3.1x and earlier, as well as Windows 9x, invoked COMMAND.COM to run batch files. The IBM OS/2 operating system supported DOS-style batch files; it also included a version of REXX, a more advanced batch-file scripting language. OS/2's batch file interpreter also supports an EXTPROC command, which passes the batch file to the program named on the EXTPROC line as a data file. The named program can then interpret the file; this is similar to the #! (shebang) mechanism in Unix-like operating systems.
Unlike Windows 98 and earlier, the Windows NT family of operating systems does not depend on MS-DOS. Windows NT introduced an enhanced 32-bit command interpreter, cmd.exe, that could execute scripts with either the .CMD or .BAT extension. Cmd.exe added additional commands, and implemented existing ones in a different way, so that the same batch file might work differently with cmd.exe than with COMMAND.COM. In most cases, operation is identical if the few unsupported commands are not used. Cmd.exe's extensions to COMMAND.COM can be disabled for compatibility.

32.
COMMAND.COM
–
COMMAND.COM is the default operating system shell for DOS operating systems and the default command-line interpreter on Windows 95, Windows 98 and Windows ME. COMMAND.COM's successor on OS/2 and Windows NT systems is CMD.EXE, although COMMAND.COM is also available on IA-32 versions of those systems to provide compatibility when running DOS applications within the NTVDM. Programs executed by COMMAND.COM are DOS programs that use the MS-DOS API to communicate with the operating system. As a shell, COMMAND.COM has two distinct modes of operation. First is the interactive mode, in which the user types commands which are then executed immediately. The second is the batch mode, which executes a predefined sequence of commands stored as a text file with the extension .BAT. Internal commands are commands stored directly inside the COMMAND.COM binary; thus, they can only be executed directly from the command interpreter. All commands are run only after the Enter key is pressed at the end of the line. COMMAND.COM is not case-sensitive, meaning commands can be typed in any mixture of upper and lower case.
BREAK – Controls the handling of program interruption with Ctrl+C or Ctrl+Break.
CHCP – Displays or changes the current system code page.
CHDIR, CD – Changes the current working directory or displays the current directory.
COPY – Copies one file to another.
CTTY – Defines the device to use for input and output.
DATE – Displays and sets the date of the system.
DEL, ERASE – Deletes a file; when used on a directory, deletes all files inside the directory only. In comparison, the external command DELTREE deletes all subdirectories and files inside a directory as well as the directory itself.
DIR – Lists the files in the specified directory.
ECHO – Toggles whether text is displayed or not; also displays text on the screen.
EXIT – Exits from COMMAND.COM and returns to the program which launched it.
LFNFOR – Enables or disables the return of long filenames by the FOR command.
LOADHIGH, LH – Loads a program into upper memory.
LOCK Enables external programs to perform low-level disk access to a volume, MKDIR, MD Creates a new directory. PATH Displays or changes the value of the PATH environment variable which controls the places where COMMAND. COM will search for executable files, PROMPT Displays or change the value of the PROMPT environment variable which controls the appearance of the prompt. REN, RENAME Renames a file or directory, RMDIR, RD Removes an empty directory. SET Sets the value of an environment variable, Without arguments, TIME Display and set the time of the system

33.
Window (computing)
–
In computing, a window is a graphical control element. It consists of an area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration. It usually has a rectangular shape that can overlap with the area of other windows. It displays the output of one or more processes and may allow input to them. Windows are primarily associated with graphical displays, where they can be manipulated with a pointer by employing some kind of pointing device. Text-only displays can also support windowing, as a way to provide multiple independent display areas. Text windows are usually controlled by keyboard, though some also respond to the mouse. A graphical user interface that uses windows as one of its main metaphors is called a windowing system, whose main components are the display server and the window manager. The idea was developed at the Stanford Research Institute, and their earliest systems supported multiple windows, but there was no obvious way to indicate boundaries between them. Research continued at Xerox Corporation's Palo Alto Research Center (PARC); during the 1980s the term WIMP, which stands for window, icon, menu, pointer, was coined at PARC. Apple had worked with PARC briefly at that time, and developed an interface based on PARC's. It was first used on Apple's Lisa and later Macintosh computers. Microsoft was developing office applications for the Mac at that time; some speculate that this gave it access to Apple's OS before it was released. Windows are two-dimensional objects arranged on a plane called the desktop. In a modern full-featured windowing system they can be resized, moved, hidden, restored or closed. Windows usually include other graphical objects, possibly including a menu bar, toolbars, controls, icons and often a working area. In the working area, the document, image, folder contents or other main object is displayed. Around the working area, within the window, there may be other smaller window areas, sometimes called panes or panels.
The working area of a single-document interface holds only one main object. Child windows in multiple-document interfaces, and tabs (for example, in many web browsers), can make several similar documents or main objects available within a single main application window. Some windows in Mac OS X have a feature called a drawer. Applications that can run either under a graphical user interface or in a text user interface may use different terminology.

34.
Desktop environment
–
The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to easily access and edit files, while the traditional command-line interface is still used when full control over the operating system is required. A desktop environment typically consists of icons, windows, toolbars, folders and wallpapers; a GUI might also provide drag-and-drop functionality and other features that make the desktop metaphor more complete. The term desktop environment originally described a style of user interface following the desktop metaphor; this usage has been popularized by projects such as the Common Desktop Environment, K Desktop Environment, and GNOME. On a system that offers a desktop environment, a window manager in conjunction with applications written using a widget toolkit are generally responsible for most of what the user sees. The window manager supports the user's interactions with the environment, while the toolkit provides developers a software library for applications with a unified look. A windowing system of some sort generally interfaces directly with the underlying operating system and libraries, providing support for graphics hardware, pointing devices, and keyboards. The window manager generally runs on top of this windowing system. Applications that are created with a particular window manager in mind usually make use of a windowing toolkit, generally provided with the operating system or window manager. A windowing toolkit gives applications access to widgets that allow the user to interact graphically with the application in a consistent way. The first desktop environment was created by Xerox and was sold with the Xerox Alto in the 1970s. The Alto was generally considered by Xerox to be an office computer; it failed in the marketplace because of poor marketing. With the Lisa, Apple introduced a desktop environment on a personal computer.
The desktop metaphor was popularized on commercial personal computers by the original Macintosh from Apple in 1984, and Microsoft Windows dominates in market share among personal computers with a desktop environment. Among the more popular of these are Google's Chromebooks and Chromeboxes and Intel's NUC. On tablets and smartphones the situation is the opposite, with Unix-like operating systems dominating the market, including iOS, Android, Tizen, Sailfish and Ubuntu. Microsoft's Windows Phone, Windows RT and Windows 10 are used on a smaller number of tablets. On systems running the X Window System, desktop environments are more dynamic: all the individual modules can be exchanged and independently configured to suit users. Not all of the program code that is part of a desktop environment has effects which are directly visible to the user; some of it may be low-level code. KDE, for example, provides so-called KIO slaves which give the user access to a wide range of virtual devices.

Wayland is a communication protocol that specifies the communication between a display server (called a Wayland compositor) and its clients.



① The evdev module of the Linux kernel gets an event and sends it to the Wayland compositor. ② The Wayland compositor looks through its scenegraph to determine which window should receive the event. The scenegraph corresponds to what is on screen, and the Wayland compositor understands the transformations that it may have applied to the elements in the scenegraph. Thus, the compositor can pick the right window and transform the screen coordinates to window-local coordinates by applying the inverse transformations. The types of transformation that can be applied to a window are restricted only by what the compositor can do, as long as it can compute the inverse transformation for the input events. ③ As in the X case, when the client receives the event, it updates the UI in response. But in the Wayland case, the rendering happens in the client via EGL, and the client just sends a request to the compositor to indicate the region that was updated. ④ The Wayland compositor collects damage requests from its clients and then re-composites the screen. The compositor can then directly issue an ioctl to schedule a pageflip with KMS.

In the Wayland protocol architecture, a client and a compositor communicate through the Wayland protocol using the reference implementation libraries.

In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc.

The hybrid kernel approach combines the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel.