Unix philosophy

The Unix philosophy, originated by Ken Thompson, is a set of cultural norms and philosophical approaches to minimalist, modular software development. It is based on the experience of leading developers of the Unix operating system. Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement. Over time, the leading developers of Unix (and of programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the "Unix philosophy."

The Unix philosophy emphasizes building simple, short, clear, modular, and extensible code that can be easily maintained and repurposed by developers other than its creators. It favors composability over monolithic design.

The Unix philosophy was documented by Doug McIlroy[1] in the Bell System Technical Journal in 1978:[2]

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".

Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.

Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.

Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
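Taken together, these maxims describe the classic Unix filter: a program that reads text on standard input, performs one transformation, and writes text on standard output, so the shell can combine it with others. As a purely illustrative sketch (written for this discussion, not drawn from any historical Unix source), a complete filter in C can be this small:

```c
#include <ctype.h>
#include <stdio.h>

/* A complete Unix-style filter: read characters from standard input,
 * apply one transformation (lowercasing), and write the result to
 * standard output, with no extraneous chatter to clutter a pipeline. */
int main(void)
{
    int c;

    while ((c = getchar()) != EOF)
        putchar(tolower(c));
    return 0;
}
```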

The development of pipes in 1973 formalized the existing principle of stdin-stdout into a philosophy in Version 3 Unix, with older software rewritten to comply. Although the tools outlook was previously visible in early utilities such as wc, cat, and uniq, McIlroy cites Thompson's grep as what "ingrained the tools outlook irrevocably" in the operating system, with later tools like tr, m4, and sed imitating how grep transforms the input stream.[5]
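What the pipe notation hides is a small amount of kernel plumbing. The following is a rough sketch in modern POSIX C (not the historical Version 3 code) of approximately what a shell does to run ls | wc -l:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Roughly what a shell does for "ls | wc -l": make a pipe, fork a
 * process for each command, and splice the producer's stdout to the
 * consumer's stdin before exec'ing the real programs. */
int main(void)
{
    int fd[2];

    if (pipe(fd) == -1) {
        perror("pipe");
        exit(1);
    }
    if (fork() == 0) {               /* producer: ls */
        dup2(fd[1], STDOUT_FILENO);  /* stdout now feeds the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls");
        _exit(127);
    }
    if (fork() == 0) {               /* consumer: wc -l */
        dup2(fd[0], STDIN_FILENO);   /* stdin now drains the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc");
        _exit(127);
    }
    close(fd[0]);                    /* parent keeps no pipe ends open */
    close(fd[1]);
    while (wait(NULL) > 0)           /* reap both children */
        ;
    return 0;
}
```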

In their preface to the 1984 book The UNIX Programming Environment, Brian Kernighan and Rob Pike, both from Bell Labs, describe the Unix design and philosophy:

Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can't be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.

The authors further write that their goal for this book is "to communicate the UNIX programming philosophy."[6]

In October 1984, Brian Kernighan and Rob Pike published a paper called Program Design in the UNIX Environment. In this paper, they criticize the accretion of program options and features found in some newer Unix systems such as 4.2BSD and System V, and explain the Unix philosophy of software tools, each performing one general function:[7]

Much of the power of the UNIX operating system comes from a style of program design that makes programs easy to use and, more important, easy to combine with other programs. This style has been called the use of software tools, and depends more on how the programs fit into the programming environment and how they can be used with other programs than on how they are designed internally. [...] This style was based on the use of tools: using programs separately or in combination to get a job done, rather than doing it by hand, by monolithic self-sufficient subsystems, or by special-purpose, one-time programs.

The authors contrast Unix tools such as cat with the larger program suites used by other systems.[7]

The design of cat is typical of most UNIX programs: it implements one simple but general function that can be used in many different applications (including many not envisioned by the original author). Other commands are used for other functions. For example, there are separate commands for file system tasks like renaming files, deleting them, or telling how big they are. Other systems instead lump these into a single "file system" command with an internal structure and command language of its own. (The PIP file copy program found on operating systems like CP/M or RSX-11 is an example.) That approach is not necessarily worse or better, but it is certainly against the UNIX philosophy.
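To make the contrast concrete, the one general function cat implements is "copy input to output". The following sketch illustrates that idea in C; it is an illustration written for this article, not the historical source:

```c
#include <stdio.h>

/* The general function of cat: concatenate the named files (or
 * standard input if none are given) onto standard output. */
static void copy(FILE *in)
{
    int c;

    while ((c = getc(in)) != EOF)
        putchar(c);
}

int main(int argc, char *argv[])
{
    if (argc == 1)
        copy(stdin);
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "r");
        if (f == NULL) {
            perror(argv[i]);  /* diagnose on stderr, keep going */
            continue;
        }
        copy(f);
        fclose(f);
    }
    return 0;
}
```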

McIlroy, then head of the Bell Labs Computing Sciences Research Center, and inventor of the Unix pipe,[8] summarized the Unix philosophy as follows:[1]

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

Beyond these statements, he has also emphasized simplicity and minimalism in Unix programming:[1]

The notion of "intricate and beautiful complexities" is almost an oxymoron. Unix programmers vie with each other for "simple and beautiful" honors — a point that's implicit in these rules, but is well worth making overt.

Conversely, McIlroy has criticized modern Linux as having software bloat, remarking that, "adoring admirers have fed Linux goodies into a disheartening state of obesity."[9] He contrasts this with the earlier approach taken at Bell Labs when developing and revising Research Unix:[10]

Everything was small... and my heart sinks for Linux when I see the size of it. [...] The manual page, which really used to be a manual page, is now a small volume, with a thousand options... We used to sit around in the Unix Room saying, 'What can we throw out? Why is there this option?' It's often because there is some deficiency in the basic design — you didn't really hit the right design point. Instead of adding an option, think about what was forcing you to add that option.

As stated by McIlroy, and generally accepted throughout the Unix community, Unix programs have always been expected to follow the concept of DOTADIW, or "Do One Thing and Do It Well." There are limited sources for the acronym DOTADIW on the Internet, but the principle is discussed at length during the development and packaging of new operating systems, especially in the Linux community.

Patrick Volkerding, the project lead of Slackware Linux, invoked this design principle in a criticism of the systemd architecture, stating that, "attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the UNIX concept of doing one thing and doing it well."[11]

In his 2003 book The Art of Unix Programming, Eric S. Raymond summarizes the Unix philosophy in a series of design rules:

Rule of Modularity

Developers should build a program out of simple parts connected by well-defined interfaces, so problems are local and parts of the program can be replaced in future versions to support new features. This rule aims to save time on debugging code that is complex, long, and unreadable.

Rule of Clarity

Developers should write programs as if the most important communication is to the developer who will read and maintain the program, rather than the computer. This rule aims to make code as readable and comprehensible as possible for whoever works on the code in the future.

Rule of Composition

Developers should write programs that can communicate easily with other programs. This rule aims to allow developers to break down projects into small, simple programs rather than overly complex monolithic programs.

Rule of Separation

Developers should separate the mechanisms of the programs from the policies of the programs; one method is to divide a program into a front-end interface and a back-end engine with which that interface communicates. This rule aims to prevent bug introduction by allowing policies to be changed with minimum likelihood of destabilizing operational mechanisms.
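On a small scale, the same separation appears inside a single C program: qsort is pure mechanism (how to sort), while the caller-supplied comparison function is pure policy (what "in order" means). The example below is invented for illustration; only qsort itself is standard:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Policy: what "in order" means. Changing the ordering touches only
 * this function; the sorting mechanism below is untouched. */
static int by_length(const void *a, const void *b)
{
    size_t la = strlen(*(const char *const *)a);
    size_t lb = strlen(*(const char *const *)b);

    return (la > lb) - (la < lb);
}

int main(void)
{
    const char *words[] = { "philosophy", "do", "one", "thing" };
    size_t n = sizeof words / sizeof words[0];

    /* Mechanism: qsort knows how to sort but not what order means. */
    qsort(words, n, sizeof words[0], by_length);
    for (size_t i = 0; i < n; i++)
        printf("%s\n", words[i]);
    return 0;
}
```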

Rule of Simplicity

Developers should design for simplicity by looking for ways to break up program systems into small, straightforward cooperating pieces. This rule aims to discourage developers’ affection for writing “intricate and beautiful complexities” that are in reality bug-prone programs.

Rule of Parsimony

Developers should avoid writing big programs. This rule aims to prevent overinvestment of development time in failed or suboptimal approaches caused by the program owners’ reluctance to throw away visibly large pieces of work. Smaller programs are not only easier to write, optimize, and maintain; they are also easier to delete when deprecated.

Rule of Transparency

Developers should design for visibility and discoverability by writing in a way that makes their thought process lucid to future developers working on the project, and by using input and output formats that make it easy to identify valid input and correct output. This rule aims to reduce debugging time and extend the lifespan of programs.

Rule of Robustness

Developers should design robust programs by designing for transparency and discoverability, because code that is easy to understand is easier to stress test for unexpected conditions that may not be foreseeable in complex programs. This rule aims to help developers build robust, reliable products.

Rule of Representation

When faced with the choice, developers should make the data more complicated rather than the procedural logic of the program, because it is easier for humans to understand complex data than complex logic. This rule aims to make programs more readable for any developer working on the project, which allows the program to be maintained.
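A common concrete form of this rule is the table-driven program: knowledge that might otherwise be spread across branching logic is folded into a data structure instead. The following small example is invented purely to illustrate the pattern:

```c
#include <stdio.h>
#include <string.h>

/* The knowledge lives in a table, not in branching logic: adding a
 * command means adding a row, not writing another if/else branch. */
static const struct {
    const char *name;
    const char *help;
} commands[] = {
    { "cat",  "concatenate files to standard output" },
    { "grep", "print lines matching a pattern" },
    { "wc",   "count lines, words, and bytes" },
};

static const char *help_for(const char *name)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
        if (strcmp(commands[i].name, name) == 0)
            return commands[i].help;
    return "unknown command";
}

int main(void)
{
    printf("%s\n", help_for("grep"));
    return 0;
}
```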

Rule of Least Surprise

Developers should design programs that build on top of the potential users' expected knowledge; for example, ‘+’ in a calculator program should always mean 'addition'. This rule aims to encourage developers to build intuitive products that are easy to use.

Rule of Silence

Developers should design programs so that they do not print unnecessary output. This rule aims to allow other programs and developers to pick out the information they need from a program's output without having to parse verbosity.

Rule of Repair

Developers should design programs that fail in a manner that is easy to localize and diagnose, or, in other words, to “fail noisily”. This rule aims to prevent incorrect output from a program from becoming an input that corrupts the output of other code undetected.
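The Rules of Silence and Repair are two sides of the same discipline, as a hypothetical line-counting utility can show: on success it prints nothing but the answer, and on failure it complains on stderr and exits nonzero so the error cannot silently become a downstream program's input. (The program and its name are invented for illustration.)

```c
#include <stdio.h>

/* On success: print only the answer. On failure: complain on stderr
 * and exit nonzero, so the error is loud and cannot silently flow
 * into the next program in a pipeline. */
int main(int argc, char *argv[])
{
    FILE *f;
    int c, lines = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: countlines file\n");
        return 2;
    }
    if ((f = fopen(argv[1], "r")) == NULL) {
        perror(argv[1]);
        return 1;
    }
    while ((c = getc(f)) != EOF)
        if (c == '\n')
            lines++;
    fclose(f);
    printf("%d\n", lines);
    return 0;
}
```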

Rule of Economy

Developers should value developer time over machine time, because machine cycles today are relatively inexpensive compared to prices in the 1970s. This rule aims to reduce development costs of projects.

Rule of Optimization

Developers should prototype software before polishing it. This rule aims to prevent developers from spending too much time on marginal gains.

Rule of Diversity

Developers should design their programs to be flexible and open. This rule aims to allow programs to be used in ways other than those their developers intended.

Rule of Extensibility

Developers should design for the future by making their protocols extensible, by allowing other developers to add plugins without modifying the program's architecture, by noting the version of the program, and more. This rule aims to extend the lifespan and enhance the utility of the code the developer writes.
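One widely used way to honor this rule, sketched below with an invented record format, is to stamp a format tag, a version number, and a length into the data itself, so that older readers can skip newer payloads instead of misreading them:

```c
#include <stdint.h>
#include <stdio.h>

/* An extensible on-disk record: the magic tag identifies the format,
 * the version says how to parse the payload, and the length lets a
 * reader skip payloads newer than it understands. */
struct record_header {
    uint32_t magic;    /* constant tag identifying the format */
    uint16_t version;  /* bumped whenever the payload layout changes */
    uint16_t length;   /* number of payload bytes that follow */
};

/* Returns 1 if a version-1 record was read, 0 if a newer record was
 * skipped, -1 on end of file or read error. */
int read_record(FILE *f, struct record_header *h)
{
    if (fread(h, sizeof *h, 1, f) != 1)
        return -1;
    if (h->version > 1) {
        fseek(f, h->length, SEEK_CUR);  /* skip, don't misparse */
        return 0;
    }
    /* ... parse the version-1 payload here ... */
    return 1;
}
```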

In 1994, Mike Gancarz (a member of the team that designed the X Window System) drew on his own experience with Unix, as well as discussions with fellow programmers and people in other fields who depended on Unix, to produce The UNIX Philosophy, which sums it up in nine paramount precepts:

1. Small is beautiful.
2. Make each program do one thing well.
3. Build a prototype as soon as possible.
4. Choose portability over efficiency.
5. Store data in flat text files.
6. Use software leverage to your advantage.
7. Use shell scripts to increase leverage and portability.
8. Avoid captive user interfaces.
9. Make every program a filter.

Richard P. Gabriel suggests that a key advantage of Unix was that it embodied a design philosophy he termed "worse is better", in which simplicity of both the interface and the implementation are more important than any other attributes of the system—including correctness, consistency, and completeness. Gabriel argues that this design style has key evolutionary advantages, though he questions the quality of some results.

For example, in the early days Unix used a monolithic kernel (which means that user processes carried out kernel system calls all on the user stack). If a signal was delivered to a process while it was blocked on a long-term I/O in the kernel, then what should be done? Should the signal be delayed, possibly for a long time (maybe indefinitely) while the I/O completed? The signal handler could not be executed when the process was in kernel mode, with sensitive kernel data on the stack. Should the kernel back out the system call and store it for replay and restart later, assuming that the signal handler completes successfully?

In these cases Ken Thompson and Dennis Ritchie favored simplicity over perfection: the Unix system would occasionally return early from a system call with an error stating that it had done nothing—the "Interrupted System Call", or error number 4 (EINTR) in today's systems. Of course the call had been aborted in order to call the signal handler. This could only happen for a handful of long-running system calls such as read(), write(), open(), and select(). On the plus side, this made the I/O system many times simpler to design and understand. The vast majority of user programs were never affected, because they didn't handle or experience signals other than SIGINT and would die right away if one was raised. For the few other programs—things like shells or text editors that respond to job control key presses—small wrappers could be added to system calls so as to retry the call right away if this EINTR error was raised. Thus, the problem was solved in a simple manner.
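The "small wrapper" pattern described above survives in POSIX programs to this day. A minimal sketch of such a retry wrapper around read(2), with the helper name invented for illustration:

```c
#include <errno.h>
#include <unistd.h>

/* Retry a read(2) that was interrupted by a signal: when the kernel
 * aborts the call with EINTR instead of restarting it, the wrapper
 * simply issues the call again. */
ssize_t read_retrying(int fd, void *buf, size_t count)
{
    ssize_t n;

    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```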

In a 1981 article entitled "The truth about Unix: The user interface is horrid"[14] published in Datamation, Don Norman criticized the design philosophy of Unix for its lack of concern for the user interface. Writing from his background in cognitive science and from the perspective of the then-current philosophy of cognitive engineering,[4] he focused on how end users comprehend and form a personal cognitive model of systems, or, in the case of Unix, fail to understand, with the result that disastrous mistakes (such as losing an hour's worth of work) are all too easy.

1.
Ken Thompson
–
Kenneth Lane Ken Thompson, commonly referred to as ken in hacker circles, is an American pioneer of computer science. Having worked at Bell Labs for most of his career, Thompson designed and implemented the original Unix operating system. He also invented the B programming language, the predecessor to the C programming language. Since 2006, Thompson has worked at Google, where he co-invented the Go programming language, Thompson was born in New Orleans. When asked how he learned to program, Thompson stated, I was always fascinated with logic and even in grade school Id work on problems in binary. Thompson was hired by Bell Labs in 1966, in the 1960s at Bell Labs, Thompson and Dennis Ritchie worked on the Multics operating system. While writing Multics, Thompson created the Bon programming language, and he also created a video game called Space Travel. Later on Bell Labs withdrew from the MULTICS project, in order to go on playing the game, Thompson found an old PDP-7 machine and rewrote Space Travel on it. In 1970, Brian Kernighan suggested the name Unix, in a somewhat treacherous pun on the name Multics, after initial work on Unix, Thompson decided that Unix needed a system programming language and created B, a precursor to Ritchies C. In the 1960s, Thompson also began work on regular expressions, Thompson had developed the CTSS version of the editor QED, which included regular expressions for searching text. QED and Thompsons later editor ed contributed greatly to the popularity of regular expressions. Almost all programs that work with regular expressions today use some variant of Thompsons notation and he also invented Thompsons construction algorithm used for converting regular expression into nondeterministic finite automaton in order to make expression matching faster. Then there was a rewrite in a language that would come to be called C. He worked mostly on the language and on the I/O system and that was for the PDP-11, which was serendipitous, because that was the computer that took over the academic community. Feedback from Thompsons Unix development was instrumental in the development of the C programming language. Thompson would later say that the C language grew up one of the rewritings of the system and, as such. In 1975, Thompson took a sabbatical from Bell Labs and went to his alma mater, there, he helped to install Version 6 Unix on a PDP-11/70. Unix at Berkeley would later become maintained as its own system, along with Joseph Condon, Thompson created the hardware and software for Belle, a world champion chess computer

2.
Dennis Ritchie
–
Dennis MacAlistair Ritchie was an American computer scientist. He created the C programming language and, with long-time colleague Ken Thompson, Ritchie and Thompson received the Turing Award from the ACM in 1983, the Hamming Medal from the IEEE in 1990 and the National Medal of Technology from President Bill Clinton in 1999. Ritchie was the head of Lucent Technologies System Software Research Department when he retired in 2007 and he was the R in K&R C, and commonly known by his username dmr. Dennis Ritchie was born in Bronxville, New York and his father was Alistair E. Ritchie, a longtime Bell Labs scientist and co-author of The Design of Switching Circuits on switching circuit theory. As a child, Dennis moved with his family to Summit, New Jersey and he graduated from Harvard University with degrees in physics and applied mathematics. However, Ritchie never officially received his PhD degree, during the 1960s, Ritchie and Ken Thompson worked on the Multics operating system at Bell Labs. However, Bell Labs pulled out of the project in 1969, Thompson then found an old PDP-7 machine and developed his own application programs and operating system from scratch, aided by Ritchie and others. In 1970, Brian Kernighan suggested the name Unix, a pun on the name Multics, to supplement assembly language with a system-level programming language, Thompson created B. Later, B was replaced by C, created by Ritchie, during the 1970s, Ritchie collaborated with James Reeds and Robert Morris on a ciphertext-only attack on the M-209 US cipher machine that could solve messages of at least 2000–2500 letters. Ritchie relates that, after discussions with the NSA, the decided not to publish it. Ritchie was also involved with the development of the Plan 9 and Inferno operating systems, and they were so influential on Research Unix that Doug McIlroy later wrote, The names of Ritchie and Thompson may safely be assumed to be attached to almost everything not otherwise attributed. Ritchie liked to emphasize that he was just one member of a group and he suggested that many of the improvements he introduced simply looked like a good thing to do, and that anyone else in the same place at the same time might have done the same thing. But Bjarne Stroustrup who designed C++ said If Dennis had decided to spend that decade on esoteric math, nowadays, the C language is widely used today in application, operating system, and embedded system development, and its influence is seen in most modern programming languages. Unix has also been influential, establishing computing concepts and principles that have been widely adopted, in the same interview, he stated that he viewed both Unix and Linux as the continuation of ideas that were started by Ken and me and many others, many years ago. In 1983, Ritchie and Thompson received the Turing Award for their development of operating systems theory. Ritchies Turing Award lecture was titled Reflections on Software Research, in 1997, both Ritchie and Thompson were made Fellows of the Computer History Museum, for co-creation of the UNIX operating system, and for development of the C programming language. In 2011, Ritchie, along with Thompson, was awarded the Japan Prize for Information, Ritchie was found dead on October 12,2011, at the age of 70 at his home in Berkeley Heights, New Jersey, where he lived alone. First news of his death came from his colleague, Rob Pike

3.
Software development
–
Software development is the process of computer programming, documenting, testing, and bug fixing involved in creating and maintaining applications and frameworks resulting in a software product. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, system software underlies applications and the programming process itself, and is often developed separately. There are many approaches to project management, known as software development life cycle models, methodologies, processes. The waterfall model is a version, contrasted with the more recent innovation of agile software development. A software development process is a framework that is used to structure, plan, a wide variety of such frameworks has evolved over the years, each with its own recognized strengths and weaknesses. One system development methodology is not necessarily suitable for use by all projects, each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project and team considerations. Different approaches to development may carry out these stages in different orders. The level of detail of the produced at each stage of software development may also vary. These stages may also be carried out in turn, or they may be repeated over various cycles or iterations, the more extreme approach usually involves less time spent on planning and documentation, and more time spent on coding and development of automated tests. More “extreme” approaches also promote continuous testing throughout the development lifecycle, there are significant advantages and disadvantages to the various methodologies, and the best approach to solving a problem using software will often depend on the type of problem. If the problem is understood and a solution can be effectively planned out ahead of time. If, on the hand, the problem is unique. The sources of ideas for software products are plenteous, in a marketing evaluation phase, the cost and time assumptions become evaluated. A decision is reached early in the first phase as to whether, based on the detailed information generated by the marketing and development staff. Students of marketing learn marketing and are exposed to finance or engineering. Most of us become specialists in just one area, to complicate matters, few of us meet interdisciplinary people in the workforce, so there are few roles to mimic. Yet, software product planning is critical to the development success and these processes may also cause the role of business development to overlap with software development. Planning is an objective of each and every activity, where we want to discover things that belong to the project, an important task in creating a software program is extracting the requirements or requirements analysis

4.
Unix
–
Among these is Apples macOS, which is the Unix version with the largest installed base as of 2014. Many Unix-like operating systems have arisen over the years, of which Linux is the most popular, Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users. The system grew larger as the system started spreading in academic circles, as users added their own tools to the system. Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration and these concepts are collectively known as the Unix philosophy. By the early 1980s users began seeing Unix as a universal operating system. Under Unix, the system consists of many utilities along with the master control program. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, the microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a disk for storage and a data terminal for input and output. However, modern systems include networking and other new devices, as graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions such as network protocols could be moved out of the kernel, Multics introduced many innovations, but had many problems. Frustrated by the size and complexity of Multics but not by the aims and their last researchers to leave Multics, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale. The name Unics, a pun on Multics, was suggested for the project in 1970. Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself, in 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science, UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to adoption of Unix by commercial startups, including Sequent, HP-UX, Solaris, AIX. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4, in the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers

5.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require a system to function. Operating systems are found on many devices that contain a computer – from cellular phones, the dominant desktop operating system is Microsoft Windows with a market share of around 83. 3%. MacOS by Apple Inc. is in place, and the varieties of Linux is in third position. Linux distributions are dominant in the server and supercomputing sectors, other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run one program at a time. Multi-tasking may be characterized in preemptive and co-operative types, in preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, e. g. Solaris, Linux, cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking, 32-bit versions of both Windows NT and Win9x, used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem, a distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing, distributed computations are carried out on more than one machine. When computers in a work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses, embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design, Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is a system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing

6.
Douglas McIlroy
–
Malcolm Douglas McIlroy is a mathematician, engineer, and programmer. As of 2007 he is an Adjunct Professor of Computer Science at Dartmouth College, McIlroy is best known for having originally developed Unix pipelines, software componentry and several Unix tools, such as spell, diff, sort, join, graph, speak, and tr. His seminal work on software componentization makes him a pioneer of component-based software engineering and he taught at MIT from 1954 to 1958. McIlroy joined Bell Laboratories in 1958, from 1965 to 1986 was head of its Computing Techniques Research Department, from 1967 to 1968, McIlroy also served as a visiting lecturer at Oxford University. In 1997, McIlroy retired from Bell Labs, and took a position as an Adjunct Professor in the Dartmouth College Computer Science Department, McIlroy is a member of the National Academy of Engineering, and has won both the USENIX Lifetime Achievement Award and its Software Tools award. He also served on the committee of CSNET. Those types are not abstract, they are as real as int, as a programmer, it is your job to put yourself out of business. What you do today can be automated tomorrow, keep it simple, make it general, and make it intelligible. The real hero of programming is the one who writes negative code

7.
Peter H. Salus
–
Peter H. Salus is a linguist, computer scientist, historian of technology, author in many fields, and an editor of books and journals. He has conducted research in germanistics, language acquisition, and computer languages and he has a 1963 PhD in Linguistics from New York University. After an intense academic career serving as professor and dean at several universities, from 1987 to 1996, he was Managing Editor of the technical journal Computing Systems. He is best known for his books on the history of computing, particularly A Quarter Century of UNIX, völuspá, The Song of the Sybil On Language Plato to Humboldt For W. H

8.
The Pragmatic Programmer
–
The Pragmatic Programmer, From Journeyman to Master is a book about software engineering by Andrew Hunt and David Thomas, published in October,1999. The book is the first in a series of books under the The Pragmatic Bookshelf label, in the book, the idea of code katas is introduced which are small exercises. The exercises are used to practice programming skills, rubber duck debugging or rubber ducking is a method of debugging code whose name is a reference to a story in the book. Official website Pragmatic Programmer on CodingHorror

9.
Assembly language
–
Each assembly language is specific to a particular computer architecture. In contrast, most high-level programming languages are generally portable across multiple architectures, Assembly language may also be called symbolic machine code. Assembly language is converted into machine code by a utility program referred to as an assembler. The conversion process is referred to as assembly, or assembling the source code, Assembly time is the computational step where an assembler is run. Assembly language uses a mnemonic to represent each low-level machine instruction or opcode, typically also each architectural register, flag, depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate development, to control the assembly process. A macro assembler includes a facility so that assembly language text can be represented by a name. A cross assembler is an assembler that is run on a computer or operating system of a different type from the system on which the code is to run. Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, a microassembler is a program that helps prepare a microprogram, called firmware, to control the low level operation of a computer. A meta-assembler is a used in some circles for a program that accepts the syntactic and semantic description of an assembly language. An assembler program creates object code by translating combinations of mnemonics and syntax for operations and this representation typically includes an operation code as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations, the use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include facilities for performing textual substitution – e. g. to generate common short sequences of instructions as inline. Some assemblers may also be able to some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors, most of them are able to perform jump-instruction replacements in any number of passes, on request. Like early programming languages such as Fortran, Algol, Cobol and Lisp, assemblers have been available since the 1950s, however, assemblers came first as they are far simpler to write than compilers for high-level languages. There may be several assemblers with different syntax for a particular CPU or instruction set architecture, despite different appearances, different syntactic forms generally generate the same numeric machine code, see further below. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations, there are two types of assemblers based on how many passes through the source are needed to produce the executable program

10.
Joseph Henry Condon
–
Joseph Henry Joe Condon was an American computer scientist, engineer and physicist, who spent most of his career at Bell Labs. The son of Edward Condon and Emilie Honzik Condon, he was named after the 19th century American physicist Joseph Henry, Condon developed an interest in physics and electronics at an early age and credited his introduction to analytical thinking to an anonymous instrument maker. He attended Johns Hopkins University and received his BS degree in physics in 1958, after graduate school, Condon joined the Metallurgy Research Division of AT&T Bell Laboratories at Murray Hill, New Jersey. He arrived about the time that the division split. Formerly physics, metallurgy and chemistry were under one executive director, after the split, physics had its own director, and chemistry and metallurgy were under another. He worked for five years on solid-state physics and metals at low temperatures — electronic band structure of metals by means of the diamagnetic susceptibility. His studies in beryllium and silver showed that magnetic domains form in non-ferromagnetic metals when the differential magnetic susceptibility is greater than unity. He developed the theory and verified it experimentally, Condon then became interested more in electronics engineering, moving out of physics. He was exposed to UNIX on the Honeywell 516 machines in the early 1970s, in the 1960s, Condon contributed to the development of local area network digital telephone switching. Condon and Ken Thompson promoted the use of the C programming language for AT&T’s switching system control programs, Condon acquired a small AT&T PBX that handled about 50 phones, he made the necessary hardware changes and Thompson wrote the necessary software programs. In 1975 Condon joined the Computer Research Center at Bell Labs where the C programming language, in collaboration with Thompson, Condon created the chess-playing machine Belle. Condon designed custom hardware while Ken designed software, in Condons obituary, Physics Today called his work on the spin glass machine a classic that remain accurate to this day, despite immense increases in computing power. Condon retired in 1989 but continued to consult with Bell Labs for another 10 years, Condon died on 2 January 2012. His designs were said to be parsimonous and his personal interests included American Indian crafts, classical music, the theater and to travel with his wife Carol in their RV. He was very Quaker and a frequent volunteer in the FISH Hospitality Program, a local charity providing shelter for homeless people and single mothers

11.
Pipeline (Unix)
–
In Unix-like computer operating systems, a pipeline is a sequence of processes chained together by their standard streams, so that the output of each process feeds directly as input to the next one. The concept of pipelines was championed by Douglas McIlroy at Unixs ancestral home of Bell Labs, during the development of Unix and it is named by analogy to a physical pipeline. The standard shell syntax for pipelines is to list multiple commands, each process takes input from the previous process and produces output for the next process via standard streams. Pipes are unidirectional, data flows through the pipeline from left to right, all widely used Unix shells have a special syntax construct for the creation of pipelines. In all usage one writes the commands in sequence, separated by the ASCII vertical bar character |, the shell starts the processes and arranges for the necessary connections between their standard streams. By default, the standard streams of the processes in a pipeline are not passed on through the pipe, instead. However, many shells have additional syntax for changing this behavior, in the csh shell, for instance, using |& instead of | signifies that the standard error stream should also be merged with the standard output and fed to the next process. The Bourne Shell can also merge standard error, using 2>&1, in the most commonly used simple pipelines the shell connects a series of sub-processes via pipes, and executes external commands within each sub-process. Thus the shell itself is doing no direct processing of the data flowing through the pipeline, however, its possible for the shell to perform processing directly, using a so-called mill, or pipemill. There are a couple of ways to avoid this behavior. First, some support an option to disable reading from stdin. Alternatively, if the drain does not need to read any input from stdin to do something useful, pipelines can be created under program control. The Unix pipe system call asks the operating system to construct a new anonymous pipe object and this results in two new, opened file descriptors in the process, the read-only end of the pipe, and the write-only end. The pipe ends appear to be normal, anonymous file descriptors, to avoid deadlock and exploit parallelism, the Unix process with one or more new pipes will then, generally, call fork to create new processes. Each process will then close the end of the pipe that it not be using before producing or consuming any data. Alternatively, a process might create a new thread and use the pipe to communicate between them, named pipes may also be created using mkfifo or mknod and then presented as the input or output file to programs as they are invoked. They allow multi-path pipes to be created, and are effective when combined with standard error redirection. Instead, the output of the program is held in the buffer

12.
Standard streams
–
In computer programming, standard streams are preconnected input and output communication channels between a computer program and its environment when it begins execution. The three I/O connections are called standard input, standard output and standard error, originally I/O happened via a physically connected system console, but standard streams abstract this. When a command is executed via a shell, the streams are typically connected to the text terminal on which the shell is running. More generally, a process will inherit the standard streams of its parent process. Users generally know standard streams as input and output channels that handle data coming from an input device, the data may be text with any encoding, or binary data. Streams may be used to chain applications, meaning the output of a program is used for input to another application, in many operating systems this is expressed by listing the application names, separated by the vertical bar character, for this reason often called the pipeline character. A well-known example is the use of an application, such as more. In most operating systems predating Unix, programs had to connect to the appropriate input and output devices. OS-specific intricacies caused this to be a programming task. One of Unixs several groundbreaking advances was abstract devices, which removed the need for a program to know or care what kind of devices it was communicating with, older operating systems forced upon the programmer a record structure and frequently non-orthogonal data semantics and device control. Unix eliminated this complexity with the concept of a data stream, a program may also write bytes as desired and need not declare how many there will be, or how they will be grouped. Another Unix breakthrough was to automatically associate input and output by default — the program did nothing to establish input and output for a typical input-process-output program. In contrast, previous operating systems usually required some—often complex—job control language to establish connections, since Unix provided standard streams, the Unix C runtime environment was obliged to support it as well. As a result, most C runtime environments, regardless of the operating system, standard input is stream data going into a program. The program requests data transfers by use of the read operation, not all programs require stream input. For example, the dir and ls programs may take command-line arguments, unless redirected, standard input is expected from the keyboard which started the program. The file descriptor for standard input is 0, the POSIX <unistd. h> definition is STDIN_FILENO, the corresponding <stdio. h> variable is FILE* stdin, similarly, standard output is the stream where a program writes its output data. The program requests data transfer with the write operation, for example, the file rename command is silent on success

13.
Research Unix
–
The term Research Unix first appeared in the Bell System Technical Journal to distinguish it from other versions internal to Bell Labs whose code-base had diverged from the primary CSRC version. However, that term was little-used until Version 8 Unix, but has been applied to earlier versions as well. Prior to V8, the system was most commonly called simply UNIX or the UNIX Time-Sharing System. AT&T licensed Version 5 to educational institutions, and Version 6 also to commercial sites, schools paid $200 and others $20,000, discouraging most commercial use, but Version 6 was the most widely used version into the 1980s. So, the first Research Unix would be the First Edition, another common way of referring to them is Version x Unix, where x is the manual edition. All modern editions of Unix—excepting Unix-like implementations such as Coherent, Minix, starting with the 8th Edition, versions of Research Unix had a close relationship to BSD. This began by using 4. 1cBSD as the basis for the 8th Edition. 1c and this continued with 9th and 10th. The ordinary user command-set was, I guess, a bit more BSD-flavored than SysVish, Version 3, Version 4 and Version 5 should not be confused with the UNIX3.0, UNIX4.0 and UNIX5.0 releases by the AT&T UNIX Support Group. After Version 10, Unix development at Bell Labs was stopped in favor of a system, Plan 9 from Bell Labs. In 2002, Caldera International released V7 Unix as FOSS under a permissive BSD-like software license, in 2017, Unix Heritage Society and Alcatel-Lucent USA Inc. List of new features in Research Unix 9th Edition

14.
Uniq
–
Uniq is a Unix utility which, when fed a text file, outputs the file with adjacent identical lines collapsed to one. First appearing in Version 3 Unix, it is a kind of filter program, typically it is used after sort. It can also only the duplicate lines, or add the number of occurrences of each line. An example, To see the list of lines in a file, sorted by the number of each occurs. List of Unix programs uniqs Linux manpage SourceForge UnxUtils – Port of several GNU utilities to Windows

15.
Stream (computing)
–
In computer science, a stream is a sequence of data elements made available over time. A stream can be thought of as items on a belt being processed one at a time rather than in large batches. Streams are processed differently from batch data – normal functions cannot operate on streams as a whole, as they have potentially unlimited data, and formally, streams are codata, not data. Functions that operate on a stream, producing another stream, are known as filters, filters may operate on one item of a stream at a time, or may base an item of output on multiple items of input, such as a moving average. The term stream is used in a number of ways, Stream editing, as with sed, awk. Stream editing processes a file or files, in-place, without having to load the file into a user interface, one example of such use is to do a search and replace on all the files in a directory, from the command line. On Unix and related systems based on the C language, a stream is a source or sink of data, Streams are an abstraction used when reading or writing files, or communicating over network sockets. The standard streams are three streams made available to all programs, i/O devices can be interpreted as streams, as they produce or consume potentially unlimited data over time. In object-oriented programming, input streams are generally implemented as iterators, in the Scheme language and some others, a stream is a lazily evaluated or delayed sequence of data elements. A stream can be used similarly to a list, but later elements are calculated when needed. Streams can therefore represent infinite sequences and series, in the Smalltalk standard library and in other programming languages as well, a stream is an external iterator. As in Scheme, streams can represent finite or infinite sequences, Stream processing — in parallel processing, especially in graphic processing, the term stream is applied to hardware as well as software. There it defines the flow of data that is processed in a dataflow programming language as soon as the program state meets the starting condition of the stream. Streams can be used as the data type for channels in interprocess communication. The term stream is applied to file system forks, where multiple sets of data are associated with a single filename. Most often, there is one main stream that makes up the file data. Bitstream Codata Data stream Data stream mining Flow Streaming algorithm Streaming media Stream processing An Approximate L1-Difference Algorithm for Massive Data Streams,1995 Feigenbaum et al

16.
Rob Pike
–
Robert Rob Pike is a Canadian programmer and author. He also co-developed the Blit graphical terminal for Unix, before that he wrote the first window system for Unix in 1981. Pike is the sole inventor named in AT&Ts US patent 4,555,775 or backing store patent that is part of the X graphic system protocol and one of the first software patents. Over the years Pike has written many text editors, sam and acme are the most well known and are still in active use, Pike, with Brian Kernighan, is the co-author of The Practice of Programming and The Unix Programming Environment. With Ken Thompson he is the co-creator of UTF-8, Pike also developed lesser systems such as the vismon program for displaying images of faces of email authors. Pike also appeared once on Late Night with David Letterman, as an assistant to the comedy duo Penn & Teller. Pike is married to Renée French, and currently works for Google, the Plan 9 from Bell Labs operating system

17.
Brian Kernighan
–
Brian Wilson Kernighan is a Canadian computer scientist who worked at Bell Labs alongside Unix creators Ken Thompson and Dennis Ritchie and contributed to the development of Unix. He is also coauthor of the AWK and AMPL programming languages, the K of K&R C and the K in AWK both stand for Kernighan. Since 2000 Brian Kernighan has been a Professor at the Computer Science Department of Princeton University, born in Toronto, Kernighan attended the University of Toronto between 1960 and 1964, earning his Bachelors degree in engineering physics. He received his PhD in electrical engineering from Princeton University in 1969 for research supervised by Peter Weiner, Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called Computers in Our World, kernighans name became widely known through co-authorship of the first book on the C programming language with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language and he authored many Unix programs, including ditroff. In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems, graph partitioning and the travelling salesman problem, in a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan was the editor for Prentice Hall International. His Software Tools series spread the essence of C/Unix thinking with makeovers for BASIC, FORTRAN, and Pascal and he has said that if stranded on an island with only one programming language it would have to be C. Kernighan coined the term Unix and helped popularize Thompsons Unix philosophy, Kernighan is also known as a coiner of the expression What You See Is All You Get, which is a sarcastic variant of the original What You See Is What You Get. Kernighans term is used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts, kernighans original 1978 implementation of Hello, World. Was sold at The Algorithm Auction, the world’s first auction of computer algorithms, in 1996, Kernighan taught CS50 which is the Harvard University introductory course in Computer Science. His students on CS50 include David J. Malan who now runs the course, an Interview with Brian Kernighan — By Mihai Budiu, for PC Report Romania, August 2000 Transcript of an interview with Brian Kernighan. Archived from the original on 2009-04-28, archived from the original on 2009-05-28

18.
Bell Labs
–
Nokia Bell Labs is an American research and scientific development company, owned by Finnish company Nokia. Its headquarters are located in Murray Hill, New Jersey, in addition to laboratories around the rest of the United States. The historic laboratory originated in the late 19th century as the Volta Laboratory, Bell Labs was also at one time a division of the American Telephone & Telegraph Company, half-owned through its Western Electric manufacturing subsidiary. Eight Nobel Prizes have been awarded for work completed at Bell Laboratories, in 1880, the French government awarded Alexander Graham Bell the Volta Prize of 50,000 francs, approximately US$10,000 at that time for the invention of the telephone. Bell used the award to fund the Volta Laboratory in Washington, D. C. in collaboration with Sumner Tainter, the laboratory is also variously known as the Volta Bureau, the Bell Carriage House, the Bell Laboratory and the Volta Laboratory. The laboratory focused on the analysis, recording, and transmission of sound, Bell used his considerable profits from the laboratory for further research and education to permit the diffusion of knowledge relating to the deaf. This resulted in the founding of the Volta Bureau c,1887, located at Bells fathers house at 1527 35th Street in Washington, D. C. where its carriage house became their headquarters in 1889. In 1893, Bell constructed a new building, close by at 1537 35th St. specifically to house the lab, the building was declared a National Historic Landmark in 1972. In 1884, the American Bell Telephone Company created the Mechanical Department from the Electrical, the first president of research was Frank B. Jewett, who stayed there until 1940, ownership of Bell Laboratories was evenly split between AT&T and the Western Electric Company. Its principal work was to plan, design, and support the equipment that Western Electric built for Bell System operating companies and this included everything from telephones, telephone exchange switches, and transmission equipment. Bell Labs also carried out consulting work for the Bell Telephone Company, a few workers were assigned to basic research, and this attracted much attention, especially since they produced several Nobel Prize winners. Until the 1940s, the principal locations were in and around the Bell Labs Building in New York City. Of these, Murray Hill and Crawford Hill remain in existence, the largest grouping of people in the company was in Illinois, at Naperville-Lisle, in the Chicago area, which had the largest concentration of employees prior to 2001. Since 2001, many of the locations have been scaled down or closed. The Holmdel site, a 1.9 million square foot structure set on 473 acres, was closed in 2007, the mirrored-glass building was designed by Eero Saarinen. In August 2013, Somerset Development bought the building, intending to redevelop it into a commercial and residential project. The prospects of success are clouded by the difficulty of readapting Saarinens design and by the current glut of aging, eight Nobel Prizes have been awarded for work completed at Bell Laboratories

19.
Berkeley Software Distribution
–
Berkeley Software Distribution is a Unix operating system derivative developed and distributed by the Computer Systems Research Group of the University of California, Berkeley, from 1977 to 1995. Today the term BSD is often used non-specifically to refer to any of the BSD descendants which together form a branch of the family of Unix-like operating systems, operating systems derived from the original BSD code remain actively developed and widely used. Historically, BSD has been considered a branch of Unix, Berkeley Unix, because it shared the initial codebase, in the 1980s, BSD was widely adopted by vendors of workstation-class systems in the form of proprietary Unix variants such as DEC ULTRIX and Sun Microsystems SunOS. This can be attributed to the ease with which it could be licensed, FreeBSD, OpenBSD, NetBSD, Darwin, and PC-BSD. The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the system, allowing researchers at universities to modify. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project, also in 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system, graduate students Chuck Haley and Bill Joy improved Thompsons Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution, 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out, some 75 copies of 2BSD were sent out by Bill Joy. 2. 9BSD from 1983 included code from 4. 1cBSD, the most recent release,2. 11BSD, was first issued in 1992. As of 2008, maintenance updates from volunteers are still continuing, a VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAXs virtual memory capabilities. 3BSD was also alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were normally called /vmunix until 4. 4BSD, 4BSD offered a number of enhancements over 3BSD, notably job control in the previously released csh, delivermail, reliable signals, and the Curses programming library. In a 1985 review of BSD releases, John Quarterman et al, many installations inside the Bell System ran 4. 1BSD. 4. 1BSD was a response to criticisms of BSDs performance relative to the dominant VAX operating system, the 4. 1BSD kernel was systematically tuned up by Bill Joy until it could perform as well as VMS on several benchmarks. Back at Bell Labs,4. 1cBSD became the basis of the 8th Edition of Research Unix, to guide the design of 4. The committee met from April 1981 to June 1983, apart from the Fast File System, several features from outside contributors were accepted, including disk quotas and job control. Sun Microsystems provided testing on its Motorola 68000 machines prior to release, the official 4. 2BSD release came in August 1983. On a lighter note, it marked the debut of BSDs daemon mascot in a drawing by John Lasseter that appeared on the cover of the printed manuals distributed by USENIX

20.
UNIX System V
–
UNIX System V is one of the first commercial versions of the Unix operating system. It was originally developed by AT&T and first released in 1983. Four major versions of System V were released, numbered 1, 2, 3, and 4, and it was the source of several common commercial Unix features. System V is sometimes abbreviated to SysV. As of 2012, the Unix market was divided between three System V variants: IBM's AIX, Hewlett-Packard's HP-UX, and Oracle's Solaris.

System V was the successor to 1982's UNIX System III. While AT&T sold their own hardware that ran System V, most customers instead ran a version from a reseller, based on AT&T's reference implementation. A standards document called the System V Interface Definition (SVID) outlined the default features. In the 1980s and early 1990s, System V was considered one of the two major versions of UNIX, the other being the Berkeley Software Distribution; historically, BSD was also commonly called BSD Unix or Berkeley Unix. The dispute between the two camps had several levels, some technical and some cultural. The divide was roughly between longhairs and shorthairs: programmers and technical people tended to line up with Berkeley and BSD, more business-oriented types with AT&T and System V. While HP, IBM, and others chose System V as the basis for their Unix offerings, other vendors, such as Sun Microsystems, built on BSD. Throughout its development, though, System V was infused with features from BSD. Since the early 1990s, due to standardization efforts such as POSIX and the commercial success of Linux, the division between System V and BSD has become less important.

The first release of System V, known inside Bell Labs as Unix 5.0, was developed by AT&T's UNIX Support Group; there was never an external release of Unix 4.0, which would have been System IV. System V also included features such as the vi editor and curses from 4.1BSD, developed at the University of California, Berkeley, and it improved performance by adding buffer and inode caches. It also added support for inter-process communication using messages, semaphores, and shared memory. SVR1 ran on DEC PDP-11 and VAX minicomputers.

System V Release 2 (SVR2) was released in April 1984. It added shell functions and the SVID; new kernel features included record and file locking, demand paging, and copy-on-write. The concept of the porting base was formalized, and the DEC VAX-11/780 was chosen for this release; the porting base is the original version of a release. Educational source licenses for SVR2 were offered by AT&T for US$800 for the first CPU; a commercial source license was offered for $43,000, with three months of support, and a price of $16,000 per additional CPU.
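The inter-process communication primitives mentioned above survive on modern Unix systems as the System V IPC family of APIs. As a hedged illustration (a minimal sketch, not code from any actual System V release; the key value and message text are arbitrary), the following C program creates a message queue with msgget, sends a message with msgsnd, reads it back with msgrcv, and removes the queue:

    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    /* System V messages carry a positive "type" so that receivers can
     * select which kind of message to dequeue. The mtype field must
     * come first; the payload layout is otherwise user-defined. */
    struct demo_msg {
        long mtype;
        char mtext[64];
    };

    int main(void)
    {
        /* An arbitrary demo key; real programs often derive one from a
         * pathname with ftok(). */
        int qid = msgget((key_t)0x5E11, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); return EXIT_FAILURE; }

        struct demo_msg out = { .mtype = 1 };
        strcpy(out.mtext, "hello via a System V message queue");
        if (msgsnd(qid, &out, sizeof out.mtext, 0) == -1)
            perror("msgsnd");

        struct demo_msg in;
        if (msgrcv(qid, &in, sizeof in.mtext, 1, 0) != -1)
            printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);  /* remove the queue when done */
        return 0;
    }

The same key-plus-identifier pattern applies to the companion semaphore (semget) and shared-memory (shmget) interfaces.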

21.
CP/M
–
CP/M was initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory; later versions added multi-user variations and were migrated to 16-bit processors. The combination of CP/M and S-100 bus computers, loosely patterned on the MITS Altair, was a computer platform widely used in business through the late 1970s and into the mid-1980s. CP/M increased the market size for both hardware and software by greatly reducing the amount of programming required to install an application on a new manufacturer's computer. An important driver of innovation was the advent of low-cost microcomputers running CP/M, as independent programmers and hackers bought them. CP/M was displaced by MS-DOS soon after the 1981 introduction of the IBM PC.

Manufacturers of CP/M-compatible systems customized portions of the operating system for their own combination of installed memory, disk drives, and console devices. CP/M would also run on systems based on the Zilog Z80 processor, since the Z80 was compatible with 8080 code. CP/M used the 7-bit ASCII set; the other 128 characters made possible by the 8-bit byte were not standardized. For example, one Kaypro used them for Greek characters, and WordStar used the 8th bit as an end-of-word marker.

The BIOS and BDOS were memory-resident, while the CCP was memory-resident unless overwritten by an application; a number of transient commands for standard utilities were also provided. The transient commands resided on disk in files with the extension .COM. The BIOS directly controlled hardware components other than the CPU and main memory; it contained functions such as character input and output and the reading and writing of disk sectors. The BDOS implemented the CP/M file system and some input/output abstractions on top of the BIOS. The CCP took user commands and either executed them directly or loaded and started an executable file of the given name; third-party applications for CP/M were also essentially transient commands. The BDOS, CCP, and standard transient commands were the same in all installations of a particular revision of CP/M, but the BIOS portion was always adapted to the particular hardware. Adding memory to a computer, for example, meant that the CP/M system had to be reinstalled with an updated BIOS capable of addressing the additional memory; a utility was provided to patch the supplied BIOS, BDOS, and CCP to allow them to be run from higher memory. Once installed, the system was stored in reserved areas at the beginning of any disk which would be used to boot the system. On start-up, the bootloader would load the system from the disk in drive A.

By modern standards CP/M was primitive, owing to the constraints on program size. With version 1.0 there was no provision for detecting a changed disk; if a user changed disks without manually rereading the disk directory, the system would write on the new disk using the old disk's directory information, ruining the data stored on the disk.

22.
RSX-11
–
RSX-11 is a discontinued family of real-time operating systems, mainly for PDP-11 computers, created by Digital Equipment Corporation and common in the late 1970s and early 1980s. RSX-11D first appeared on the PDP-11/40 in 1972. It was designed for, and much used in, process control, but was also popular for program development.

Henry Krejci was the project leader for RSX-11D up to version 4. Dr. Garth Wolfendale, originally from the UK, took over RSX-11D development from 1972 to 1976 in Maynard, leading the redesign and commercial release of the operating system as well as adding support for the 22-bit PDP-11/70 system. Before moving to the US, he set up the team in the UK that designed and prototyped IAS, an interactive layer based on the RSX-11D operating system; when he moved to the US, Andy Wilson became the leader in the UK and led the development and release of the IAS system. Ron McLean was the leader for RSX-20F/RSX-10F, a version of RSX-11D (not RSX-11M, as many suspected) that served as a PDP-10 front end. Dave Cutler was the leader for RSX-11M, an adaptation of the earlier RSX-11D for a smaller memory footprint. Principles first tried in RSX-11M later appeared in DEC's VMS; this lineage is made clear in Cutler's foreword to Inside Windows NT by Helen Custer.

RSX-11 existed in many versions:

RSX-11A, RSX-11C – small paper-tape real-time executives.
RSX-11B – a small real-time executive based on RSX-11C with support for disk I/O. To start up the system, DOS-11 was booted first; RSX-11B programs used DOS-11 macros to perform disk I/O.
RSX-11D – a multiuser disk-based system.
IAS – a timesharing-oriented variant of RSX-11D, released at about the same time as the PDP-11/70; the first version of RSX to include DCL, which was known as PDS.
RSX-11M – a multiuser version that was popular on all PDP-11s.
RSX-11S – a memory-resident version of RSX-11M used in embedded real-time applications; RSX-11S applications were developed under RSX-11M.
RSX-20F – a PDP-11/40 front-end processor operating system for the DEC KL10 processor.
P/OS – a version of RSX-11M-Plus targeted at the DEC Professional line of PDP-11-based personal computers.
DOS/RV (Russian: ОСРВ-СМ) – two names for the clandestine clone of RSX-11M that was produced in the Socialist bloc.

23.
Minimalism
–
In visual arts, music, and other media, minimalism is a style that uses pared-down design elements. Minimalism began in post-World War II Western art, most strongly with American visual arts in the 1960s. Prominent artists associated with minimalism include Donald Judd, John McCracken, Agnes Martin, Dan Flavin, Robert Morris, Anne Truitt, and Frank Stella. It derives from the reductive aspects of modernism and is interpreted as a reaction against abstract expressionism. Minimalism in music often features repetition and iteration, as in the compositions of La Monte Young, Terry Riley, Steve Reich, and Philip Glass. The term minimalist often colloquially refers to anything that is spare or stripped to its essentials; it has accordingly been used to describe the plays and novels of Samuel Beckett, the films of Robert Bresson, the stories of Raymond Carver, and the automobile designs of Colin Chapman. The word was first used in English in the early 20th century to describe a 1913 composition by the Russian painter Kasimir Malevich of a black square on a white ground.

An exhibition at the Solomon R. Guggenheim Museum curated by Lawrence Alloway, also in 1966, showcased geometric abstraction in the American art world via the shaped canvas and Color Field painting. In the wake of those exhibitions and a few others, the art movement called minimal art emerged. Minimal art was inspired in part by the paintings of Barnett Newman, Ad Reinhardt, and Josef Albers, and by the works of artists as diverse as Pablo Picasso, Marcel Duchamp, and Giorgio Morandi. Minimalism was also a reaction against the painterly subjectivity of Abstract Expressionism that had been dominant in the New York School during the 1940s and 1950s. As one of the artists protested, "The philosopher or art historian who can envision me (or anyone at all) arriving at aesthetic judgments in this way reads shockingly more into himself or herself than into my article." The minimalists very explicitly stated that their art was not about self-expression. In general, minimalism's features included geometric, often cubic forms purged of much metaphor, equality of parts, repetition, neutral surfaces, and industrial materials.

Robert Morris, a theorist and artist, wrote a three-part essay, "Notes on Sculpture 1–3", originally published across three issues of Artforum in 1966. In these essays, Morris attempted to define a conceptual framework and formal elements for himself and for the new movement. The essays paid great attention to the idea of the gestalt: parts "bound together in such a way that they create a maximum resistance to perceptual separation". Morris later described an art represented by a "marked lateral spread"; the general shift in theory of which this essay is an expression suggests the transition into what would later be referred to as postminimalism. Stella's decisions about structures on the front surface of the canvas were therefore not entirely subjective. In the show catalog, Carl Andre noted, "Art excludes the unnecessary. Frank Stella has found it necessary to paint stripes. There is nothing else in his painting." Because of a tendency in art to exclude the pictorial, illusionistic, and fictive in favor of the literal, there was a movement away from the painterly.

24.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy.

Linux was originally developed for personal computers based on the Intel x86 architecture. Because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is also the leading operating system on servers and other big-iron systems such as mainframe computers, and it is used by around 2.3% of desktop computers. The Chromebook, which runs the Linux-kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20% of sub-$300 notebook sales in the US. Linux also runs on embedded systems, devices whose operating system is built into the firmware and is highly tailored to the system; this includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions. Many smartphones and tablet computers run Android and other Linux derivatives.

The development of Linux is one of the most prominent examples of free and open-source software collaboration. The underlying source code may be used, modified, and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install, and because Linux is freely redistributable, anyone may create a distribution for any intended use.

The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T licensed the operating system's source code to anyone who asked; as a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product.

The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software; later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD, and FreeBSD descended, predated that of Linux, and Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, its licensing terms prevented it from being free software until the licensing changed in April 2000.

25.
Man page
–
A man page (short for manual page) is a form of software documentation usually found on a Unix or Unix-like operating system. Topics covered include computer programs, formal standards, and conventions. A user may invoke a man page by issuing the man command; by default, man uses a terminal pager program such as more or less to display its output. To read a man page for a Unix command, a user can type man followed by the command name. Pages are traditionally referred to using the notation name(section): for example, printf(3). The same page name may appear in more than one section of the manual, such as when the names of system calls, user commands, or macro packages coincide; examples are man(1) and man(7), or printf(1) and printf(3). The syntax for accessing the non-default manual section varies between different man implementations. On Solaris and illumos, for example, the syntax for reading printf(3C) is man -s 3c printf; on Linux and BSD derivatives the same invocation would be man 3 printf, which searches for printf in section 3 of the man pages.

In the first two years of the history of Unix, no documentation existed; the Unix Programmer's Manual was first published on November 3, 1971. The first actual man pages were written by Dennis Ritchie and Ken Thompson at the insistence of their manager Doug McIlroy in 1971. Aside from the man pages, the Programmer's Manual also accumulated a set of papers, some of them tutorials. Later versions of the documentation imitated the first man pages' terseness; Ritchie added a "How to get started" section to the Third Edition introduction, and Lorinda Cherry provided the "Purple Card" pocket reference for the Sixth and Seventh Editions. Versions of the software were named after the revision of the manual. For the Fourth Edition, the man pages were formatted using the troff typesetting package and its set of -man macros. At the time, the availability of online documentation through the manual system was regarded as a great advance. The modern descendants of 4.4BSD also distribute man pages as one of the primary forms of system documentation.

Few alternatives to man have enjoyed much popularity, with the possible exception of the GNU Project's info system. In addition, some Unix GUI applications now provide end-user documentation in HTML. Man pages are usually written in English, but translations into other languages may be available on the system. The default format of the man pages is troff, with either the macro package man or mdoc; this makes it possible to typeset a man page into PostScript, PDF, and various other formats for viewing or printing. Most Unix systems have a package providing a command that enables users to browse their man pages using an HTML browser. On some systems, section 8 is relegated to the 1M subsection of the main commands section. Some subsection suffixes have a general meaning across sections, and some versions of man cache the formatted versions of the last several pages viewed.

26.
Patrick Volkerding
–
Patrick Volkerding is the founder and maintainer of the Slackware Linux distribution. Volkerding is Slackware's "Benevolent Dictator for Life" and is known informally as "The Man". He earned a Bachelor of Science in computer science from Minnesota State University Moorhead in 1993. Volkerding is a Deadhead, and by April 1994 he had already attended 75 concerts. He is also a Church of the SubGenius affiliate/member; the use of the word "slack" in Slackware is a homage to J. R. "Bob" Dobbs. About the SubGenius influence on Slackware, Volkerding has stated, "I'll admit that it was SubGenius inspired. In fact, back in the 2.0 through 3.0 days we used to print a dobbshead on each CD." Volkerding is an avid homebrewer and beer lover; early versions of Slackware would entreat users to send him a bottle of local beer in appreciation for his work. For a short while, Chris Lumens and others assisted with his work on Slackware, but due to the lack of a continuing revenue stream following the sale of his publisher, Walnut Creek CDROM, to BSDi, these people had to be let go. For the last several years Patrick Volkerding has managed Slackware with the help of volunteers and testers.

27.
Slackware Linux
–
Slackware is a Linux distribution created by Patrick Volkerding in 1993. Slackware aims for design stability and simplicity and to be the most Unix-like Linux distribution; it makes as few modifications as possible to software packages from upstream and tries not to anticipate use cases or preclude user decisions. In contrast to most modern Linux distributions, Slackware provides no graphical installation procedure, and it uses plain text files and only a small set of shell scripts for configuration and administration. Without further modification it boots into a command-line interface environment. Because of its many conservative and simplistic features, Slackware is often considered to be most suitable for advanced and technically inclined Linux users. Slackware is available for the IA-32 and x86-64 architectures, with a port to the ARM architecture. While Slackware is mostly free and open-source software, it does not have a formal bug-tracking facility or public code repository; releases are periodically announced by Volkerding. There is no formal membership procedure for developers, and Volkerding is the primary contributor to releases.

The name Slackware stems from the fact that the distribution started as a private side project with no intended commitment. To prevent it from being taken too seriously at first, Volkerding gave it a humorous name; "Slackware" refers to the pursuit of "Slack", a tenet of the Church of the SubGenius. Certain aspects of Slackware graphics reflect this, such as the pipe which Tux is smoking. A humorous reference to the Church of the SubGenius can be found in many versions of the install.end text files, which indicate the end of a software series to the setup program; in recent versions, including Slackware release 14.1, the text is ROT13 obfuscated.

Patrick Volkerding started with SLS after needing a LISP interpreter for a school project at the then-named Moorhead State University (MSU). He found CLISP was available for Linux and downloaded SLS to run it. A few weeks later, Volkerding was asked by his artificial intelligence professor at MSU to show him how to install Linux at home and on some of the computers at school. Volkerding had made notes describing fixes to issues he found after installing SLS, and he and his professor went through and applied those changes to a new installation. However, this took almost as long as it took to just install SLS, so Volkerding folded his fixes into the distribution itself; this was the start of Slackware. Volkerding continued making improvements to SLS: fixing bugs, upgrading software, automating the installation of shared libraries and the kernel image, and fixing file permissions. In a short time, Volkerding had upgraded around half the packages beyond what SLS had available.

Volkerding had no intention of providing his modified SLS version to the public. During that time, many SLS users on the internet were asking SLS for a new release, so Volkerding posted an offer of his improved version, to which he received many positive responses. After a discussion with the local sysadmin at MSU, Volkerding obtained permission to upload Slackware to the university's FTP server. This first Slackware release, version 1.00, was distributed on 17 July 1993 at 00:16:36. After the announcement was made, Volkerding watched as a flood of FTP connections continually crashed the server.

28.
Systemd
–
Systemd is an init system used in Linux distributions to bootstrap the user space and manage all processes subsequently, instead of the UNIX System V or Berkeley Software Distribution init systems. It is published as free and open-source software under the terms of the GNU Lesser General Public License version 2.1 or later. One of systemd's main goals is to unify basic Linux configurations and service behaviors across all distributions. As of 2015, a number of Linux distributions had adopted systemd as their default init system, often following parent distributions such as Red Hat's.

The name systemd adheres to the Unix convention of naming daemons by appending the letter d; it is also a wordplay on the term "System D", which refers to a person's ability to adapt quickly and improvise to solve problems. Systemd was initially developed by Lennart Poettering and Kay Sievers, engineers working for Red Hat. Poettering describes systemd development as "never finished, never complete"; in January 2013, he described systemd not as one program, but rather as a large software suite that includes 69 individual binaries.

As an integrated software suite, systemd replaces the startup sequences and runlevels controlled by the init daemon, and it also integrates many other services that are common on Linux systems, such as handling user logins. Like the init daemon, systemd is a daemon that manages other daemons, which, including systemd itself, are background processes. Systemd is the first daemon to start during booting and the last daemon to terminate during shutdown; the systemd daemon serves as the root of the user space's process tree. Systemd executes elements of its startup sequence in parallel, which is faster than the traditional startup sequence's sequential approach. For inter-process communication, systemd makes Unix domain sockets and D-Bus available to the running daemons. The state of systemd itself can also be preserved in a snapshot for future recall.

Systemd records initialization instructions for each daemon in a unit file that uses a declarative language. Unit file types include service, socket, device, mount, automount, swap, target, path, timer, snapshot, and slice. systemctl may be used to introspect and control the state of the systemd system and service manager, and systemd-analyze may be used to determine system performance statistics and retrieve other state and tracing information from the system.

Systemd tracks processes using the Linux kernel's cgroups subsystem instead of using process identifiers; thus, daemons cannot "escape" systemd. Systemd not only uses cgroups but also augments them with systemd-nspawn and machinectl, two utility programs that facilitate the creation and management of Linux containers. Since version 205, systemd also offers ControlGroupInterface, an API to the Linux kernel cgroups; the Linux kernel cgroups are adapted to support kernfs and are being modified to support a unified hierarchy. systemd-consoled, a user-space console daemon, had its preview version released in October 2014 as part of systemd version 217; it was removed from systemd on July 29, 2015 by David Herrmann. systemd-journald is a daemon responsible for event logging, with append-only binary files serving as its logfiles.
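As a hedged sketch of the supervision model described above (an illustration, not an excerpt from systemd's own sources), the C program below shows how a service can report readiness and liveness to systemd through the sd_notify call from libsystemd; it assumes a unit file declaring Type=notify and, for the watchdog line, a WatchdogSec= setting, and the ten-second interval is arbitrary:

    /* Build with: cc notify_demo.c $(pkg-config --cflags --libs libsystemd) */
    #include <systemd/sd-daemon.h>
    #include <unistd.h>

    int main(void)
    {
        /* ... perform initialization here: open sockets, load config ... */

        /* Tell systemd that startup is complete; for a Type=notify unit,
         * "systemctl start" keeps waiting until this arrives. */
        sd_notify(0, "READY=1");

        for (;;) {
            /* Periodic liveness ping; only significant if the unit file
             * sets WatchdogSec=. */
            sd_notify(0, "WATCHDOG=1");
            sleep(10);  /* placeholder for the daemon's real work */
        }
    }

Because systemd already tracks the service in its own cgroup, such a program does not need to fork into the background at all; it simply runs in the foreground under systemd's supervision.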

29.
Daemon (computing)
–
In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the name of a daemon ends with the letter d, as a clarification that the process is, in fact, a daemon; for example, syslogd is the daemon that implements the system logging facility. In a Unix environment, the parent process of a daemon is often, but not always, the init process.

Systems often start daemons at boot time to respond to network requests or hardware activity. Daemons such as cron may also perform defined tasks at scheduled times. The term was coined by the programmers of MIT's Project MAC, who took the name from Maxwell's demon, an imaginary being from a thought experiment that constantly works in the background. Maxwell's demon is consistent with Greek mythology's interpretation of a daemon as a supernatural being working in the background; however, BSD and some of its derivatives have adopted a Christian demon as their mascot rather than a Greek daemon. The word daemon is an older spelling of demon and is pronounced /ˈdiːmən/ DEE-mən; in the context of software, the original pronunciation /ˈdiːmən/ has drifted to /ˈdeɪmən/ DAY-mən for some speakers. Alternate terms for daemon are service, started task, and ghost job. After the term was adopted for computer use, it was rationalized as a backronym for Disk And Execution MONitor.

Daemons which connect to a computer network are examples of network services. More commonly, though, a daemon may be any background process. On a Unix-like system, a process typically becomes a daemon by forking and having its parent exit; this is sometimes required so that the process can become a session leader, and it also allows the parent process to continue its normal execution. A daemon launched by forking and exiting typically must perform other housekeeping as well, and such procedures are often implemented in convenience routines such as the daemon function in Unix. Setting the root directory as the current working directory keeps the process from holding in use any directory that may be on a mounted file system; required files are opened later. In the Microsoft DOS environment, daemon-like programs were implemented as terminate-and-stay-resident (TSR) software. On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons; they run as processes, usually do not interact with the monitor, keyboard, and mouse, and may be launched by the operating system at boot time. However, any Windows application can perform the role of a daemon, not just a service. On the classic Mac OS, optional features and services were provided by files loaded at startup time that patched the operating system; these were known as system extensions and control panels.
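The fork-and-exit sequence described above is conventionally wrapped in a small routine. The following C sketch shows the classic double-fork idiom (a minimal illustration under the usual POSIX assumptions; the helper name daemonize is hypothetical, and many systems provide a ready-made daemon() library call instead):

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Classic daemonization: fork so the parent can exit, start a new
     * session so the process loses its controlling terminal, then fork
     * again so the daemon can never reacquire one. */
    static void daemonize(void)  /* hypothetical helper name */
    {
        pid_t pid = fork();
        if (pid < 0) exit(EXIT_FAILURE);
        if (pid > 0) exit(EXIT_SUCCESS);  /* parent returns to the shell */

        if (setsid() < 0) exit(EXIT_FAILURE);  /* become session leader */

        pid = fork();
        if (pid < 0) exit(EXIT_FAILURE);
        if (pid > 0) exit(EXIT_SUCCESS);  /* session leader exits */

        umask(0);  /* drop any inherited restrictive file-mode mask */
        if (chdir("/") != 0)  /* avoid keeping a mounted file system in use */
            exit(EXIT_FAILURE);

        /* Detach from the inherited standard streams. */
        close(STDIN_FILENO);
        close(STDOUT_FILENO);
        close(STDERR_FILENO);
    }

    int main(void)
    {
        daemonize();
        for (;;) {
            /* ... the daemon's background work goes here ... */
            sleep(60);
        }
    }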

30.
Eric S. Raymond
–
Eric S. Raymond is an American software developer and open-source advocate. He wrote a guidebook for the roguelike game NetHack, and in the 1990s he edited and updated the Jargon File. Raymond was born in Boston, Massachusetts, in 1957 and lived in Venezuela as a child; his family moved to Pennsylvania in 1971. He has suffered from cerebral palsy since birth, and his weakened physical condition motivated him to go into computing. Raymond began his career writing proprietary software between 1980 and 1985. In 1990, noting that the Jargon File had not been maintained since about 1983, he adopted it; Paul Dourish maintains an archived original version of the Jargon File because, he says, Raymond's updates "essentially destroyed what held it together."

In 1996 Raymond took over development of the open-source email software popclient, renaming it Fetchmail. Soon after this experience, in 1997, he wrote the essay The Cathedral and the Bazaar, detailing his thoughts on open-source software development and why it should be done as openly as possible. The essay was based in part on his experience in developing Fetchmail, and he first presented his thesis at the annual Linux Kongress on May 27, 1997. He later expanded the essay into a book, The Cathedral and the Bazaar; Hahn would later describe the 1999 book as "clearly influential." From the late 1990s onward, due in part to the popularity of his essay, Raymond became a prominent voice in the open-source movement. He co-founded the Open Source Initiative (OSI) in 1998, taking on the self-appointed role of ambassador of open source to the press, business, and public. He remains active in OSI, though he stepped down as president of the initiative in February 2005. In 1998 Raymond received and published a Microsoft document expressing worry about the quality of rival open-source software; he named this document, together with others subsequently leaked, the Halloween Documents.

In 2000–2002 he wrote a number of HOWTOs still included in the Linux Documentation Project, and his personal archive also lists a number of non-technical and very early non-Linux FAQs. At this time he also created CML2, a source code configuration system; while originally intended for the Linux operating system, it was rejected by kernel developers. Raymond attributed this rejection to "kernel list politics", whereas Linus Torvalds said in a 2007 mailing list post that, as a matter of policy, the development team preferred more incremental changes. His 2003 book The Art of Unix Programming discusses user tools for programming. Raymond is currently the administrator of the project page for the GPS data tool gpsd; some versions of NetHack include his guide, and he has also contributed code and content to the free software video game The Battle for Wesnoth.

Raymond coined an aphorism he dubbed Linus's Law, inspired by Linus Torvalds: "Given enough eyeballs, all bugs are shallow." It first appeared in his book The Cathedral and the Bazaar. Raymond has had a number of disputes with other figures in the free software movement; as head of the Open Source Initiative, he argued that advocates should focus on the potential for better products.